
Google Unveils Search Generative Experience (SGE)
In May 2023, Google unveiled a groundbreaking update to its search engine – the Search Generative Experience (SGE). This AI-powered search aims to revolutionize how users interact with and obtain information online. By leveraging generative AI, SGE provides users with quick, contextual answers to complex queries, going beyond the traditional list of links.
SGE uses AI in three key ways:
- AI Snapshot: An AI-generated snippet at the top of the search results page that directly answers the user’s question.
- Conversational Mode: Allows users to ask follow-up questions related to the original search, maintaining context.
- Vertical Experiences: Offers tailored features and product details for commercial searches.
How SGE Works: Web Scraping and AI
To generate these AI-powered answers, Google’s SGE relies on web scraping – the process of extracting data from websites. Google’s AI algorithms analyze and understand webpage content and identify patterns in user behavior to provide more accurate and personalized search results.
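To make "web scraping" concrete, here is a minimal sketch of the extraction step using only Python's standard library. This is an illustration of scraping in general, not Google's actual pipeline, and the sample page is invented for the example; production crawlers also handle fetching, robots.txt, deduplication, and much more.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text and outbound links from an HTML page."""
    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.links = []
        self._skip = 0  # depth inside <script>/<style>, whose contents we ignore

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text_parts.append(data.strip())

# A hardcoded page stands in for a fetched webpage.
page = """<html><body>
<h1>Exoplanet imaging</h1>
<p>The first exoplanet image dates to <a href="/2004">2004</a>.</p>
<script>ignored();</script>
</body></html>"""

parser = TextExtractor()
parser.feed(page)
print(parser.text_parts)
print(parser.links)
```

The extracted text and links are exactly the kind of raw material a search engine can index or feed into model training.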
Google recently updated its privacy policy to disclose that its AI services, including Bard and Cloud AI, may be trained on public data scraped from the web. While Google claims this has long been transparent in its policy for services like Google Translate, the update clarifies that newer services like Bard (now Gemini) are also included.

The AI models powering SGE learn through observation and pattern matching, a process known as training. By analyzing millions of webpages, the AI can generate answers even to queries it hasn’t encountered before. However, this reliance on web-scraped data raises concerns about the accuracy and reliability of the information provided.
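A toy sketch can show how pattern matching lets a system respond to a query it has never seen verbatim. This bag-of-words similarity lookup is a deliberately simplified stand-in; real models like those behind SGE learn statistical patterns across billions of parameters rather than retrieving from a lookup table, and the corpus below is invented for illustration.

```python
from collections import Counter
import math

# Toy "training" corpus: queries the system has seen, with known answers.
corpus = {
    "when was the first exoplanet image taken": "2004",
    "who operates the james webb space telescope": "NASA, ESA and CSA",
    "what does sge stand for": "Search Generative Experience",
}

def bow(text):
    """Bag-of-words representation: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(query):
    """Return the answer paired with the most similar seen query."""
    q = bow(query)
    best = max(corpus, key=lambda seen: cosine(q, bow(seen)))
    return corpus[best]

# An unseen phrasing still matches the right learned pattern.
print(answer("first image of an exoplanet"))  # → 2004
```

The same mechanism also hints at the risk noted above: if the training data is wrong or the match is spurious, the system answers confidently anyway.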
The Potential Pitfalls of AI-Generated Search
One of the main issues with AI-generated search is the potential for incorrect or misleading answers, known as “AI hallucinations.” These errors can occur due to insufficient training data, biases in the data, or incorrect assumptions made by the AI model.
A prime example of this issue was seen in the first demo of Google’s AI chatbot, Bard. When asked about discoveries from the James Webb Space Telescope, Bard confidently stated that the telescope took the first pictures of an exoplanet, which is incorrect. The first exoplanet image was actually taken in 2004, as pointed out by several astronomers on Twitter.
This incident highlights the tendency of AI chatbots to confidently state incorrect information as fact. As Google and other search engines increasingly rely on AI-generated answers, there is a risk that users will be presented with inaccurate or misleading information without realizing it.
The Monopolistic Grip of AI-Powered Search
As the dominant player in the search engine market, Google’s move towards AI-generated answers raises concerns about its potential monopolistic grip on information. With SGE, Google becomes not just a provider of links to relevant websites but an arbiter of truth, presenting definitive answers to user queries.
This shift in power dynamics could have far-reaching implications. If users increasingly rely on Google’s AI-generated answers without verifying the information from original sources, the search giant could effectively control what is considered factual and shape public opinion on various topics.
Moreover, as Google’s AI continues to scrape data from websites to train its models, it may starve those very websites of traffic. If users can find the information they need directly in Google’s AI-generated answers, they may have less reason to click through to the original sources. This could have a devastating impact on publishers and content creators who rely on search engine traffic for revenue.
Safeguards and Responsible AI Development
Recognizing the potential risks associated with AI-generated search, Google and other AI experts are working to implement safeguards and promote responsible AI development.
Google claims to adhere to its AI Principles, which emphasize developing AI that is socially beneficial, avoids creating unfair bias, is built and tested for safety, and upholds high standards of scientific excellence. The company also states that it uses human feedback and reviews to evaluate and improve the quality of its AI products.
To prevent AI hallucinations, experts suggest several strategies:
- Limiting possible outcomes through regularization techniques.
- Training AI models with only relevant and specific data sources.
- Creating templates for the AI to follow.
- Providing feedback to the AI on desired and undesired outputs.
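Two of these strategies — limiting possible outcomes and answer templates — can be sketched as a simple output guardrail. This is a hypothetical illustration, not Google's implementation: the template, the trusted-fact set, and the fallback message are all invented for the example.

```python
import re

# Hypothetical guardrail for a "what year ...?" question:
# only surface the model's answer if it matches the expected template
# AND appears in facts extracted from trusted, scraped sources.
YEAR_TEMPLATE = re.compile(r"^\d{4}$")
KNOWN_FACTS = {"2004", "2021"}  # e.g. dates verified against source pages

def guarded_answer(model_output: str) -> str:
    candidate = model_output.strip()
    if YEAR_TEMPLATE.match(candidate) and candidate in KNOWN_FACTS:
        return candidate
    return "No verified answer found."

print(guarded_answer("2004"))       # passes template and grounding check
print(guarded_answer("last year"))  # rejected: does not fit the template
print(guarded_answer("1895"))       # rejected: not in the trusted data
```

Constraining outputs this way trades coverage for reliability: the system answers fewer questions, but is far less likely to state an unverified claim as fact.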
Google also encourages users to think critically about the responses they receive from generative AI tools and to use other resources to verify information presented as fact. The company provides reporting tools for users to flag incorrect or problematic outputs, which helps refine the AI models over time.
The Future of Search: Balancing Innovation and Responsibility
As Google and other tech giants continue to push the boundaries of AI in search, it is crucial to strike a balance between innovation and responsibility. While AI-generated answers have the potential to revolutionize how we access and interact with information, we must remain vigilant about the accuracy and reliability of the information provided.
Users should be encouraged to think critically, fact-check AI-generated answers, and consult original sources when necessary. Search engines must prioritize transparency about their use of AI and web-scraped data, and implement robust safeguards against biases and inaccuracies.
Regulators and policymakers also have a role to play in ensuring that the development and deployment of AI in search adheres to ethical principles and does not lead to monopolistic control over information. Striking the right balance will require ongoing collaboration between tech companies, AI experts, policymakers, and the public.
As we navigate this new era of AI-powered search, it is essential to embrace the potential benefits while remaining mindful of the risks and challenges. By doing so, we can harness the power of AI to enhance our access to knowledge while preserving the integrity of the information we rely on to make informed decisions.
The launch of Google’s Search Generative Experience marks a significant milestone in the evolution of search engines. As we witness this technological leap forward, it is up to all of us – users, developers, and regulators alike – to ensure that AI-generated search serves as a tool for empowerment rather than a source of misinformation or monopolistic control. Only by approaching this innovation with a critical eye and a commitment to responsible development can we truly unlock the potential of AI to benefit society as a whole.