Generative AI Red Flags in Big Tech

It’s estimated that more than half of all Americans have used generative AI in one form or another over the past year. It’s no surprise, then, that major tech companies are investing more in developing their own programs or integrating existing ones into their platforms so they aren’t left behind as the technology continues to evolve.

Let’s examine two recent instances of big tech companies adopting generative AI, including how these programs are designed to work and what—if any—potential risks and long-term implications are associated with integrating them into existing technological ecosystems that have become part of consumers’ daily lives.

Google Gemini and Apple Intelligence

Recently, Apple and Google—both of which ranked in the top 10 of the 2024 Fortune 500 List with a combined annual revenue of $670 billion—have either launched or announced an upcoming integration of generative AI programs within their respective operating systems.

Google Gemini

Previously known as Bard, Google Gemini is a large language model (LLM) that serves as both a chatbot that can understand prompts to generate multiple types of content and the underlying AI model that can integrate with other Google devices and services.

Since launching in December of 2023, Gemini has already faced public scrutiny for multiple stumbles, most notably when Google temporarily paused its image-generation feature in February 2024 after it produced historically inaccurate depictions of people.

Apple Intelligence

Announced at the Worldwide Developers Conference in June of 2024, Apple Intelligence is purported to combine user data with generative AI, backed by an integration with OpenAI’s popular ChatGPT platform, to provide more personalized and unique interactions. The launch is slated for late 2024 on newer Apple devices and operating systems.

Historically, ChatGPT has also had its fair share of issues:

  • Operating off of outdated information due to an inability to reference real-time data
  • Generating inconsistent responses to similar prompts
  • Difficulty maintaining context over longer interactions

Potential Risks of GenAI in Big Tech

As with any new technology, the more widespread adoption of generative AI by companies like Apple and Google carries potential risks, especially if the proper steps aren’t taken to protect users while working within the current limitations of the technology.

Consumer Privacy and Data Collection

An estimated 82 percent of consumers have expressed concerns about how companies collect and use their data. As a result, more than 90 percent of security and privacy professionals have recognized the need to address those concerns when it comes to the use of consumer data in conjunction with AI.

To protect their data, Google has advised users not to share confidential or sensitive information when interacting with Gemini’s chatbot feature, as human reviewers routinely read, annotate, and process those conversations to improve the program’s functionality. These conversations can be retained for up to three years.

Apple has assured users that Apple Intelligence will leverage its new Private Cloud Compute system, which is designed to respect user privacy and avoid collecting personal information. Even so, some users remain skeptical, believing that the combination of these privacy measures and the infrastructure needed to run ChatGPT may result in limited functionality, confusion, and dissatisfaction.

Operating within GenAI’s Current Capabilities

Beyond data privacy, it’s important to remember that generative AI is still a relatively new technology. Several aspects of its functionality have raised red flags since its rise in popularity, including the following:

  • Limited developmental transparency and accountability, which may increase liability risks
  • The use of limited or inaccurate training data, which may result in biased or even discriminatory results
  • Inaccurate or outright false results presented as fact (also known as hallucinations), which are generally the result of limited datasets, misinterpretation, or poor system training

What More GenAI Integration Means for You

The long-term effects of integrating generative AI into the systems that are part of our day-to-day lives won’t be fully understood until users and businesses have had time to adopt them. Judging by how they are designed, however, both Google Gemini and Apple Intelligence demonstrate similar potential:

  • Improved search functionality with personalized results
  • Enhanced virtual assistants
  • New opportunities for content and app development
  • Better detection of scams and fraudulent activity
  • Evolving e-commerce and digital advertising

Choosing the right generative AI tools can be what makes or breaks your business. Fullcast offers a series of trusted products backed by AI and designed to make the technology work for you. Contact us today to learn more about our latest offerings, including Fullcast Scenarios, Copilot for RevOps, and Datajoin.

Fullcast was built for RevOps leaders by RevOps leaders with a goal of bringing together all of the moving pieces of our clients’ sales go-to-market strategies and automating their execution.