
On July 25, Reuters reported that Alphabet, Google’s parent company, beat capital-market expectations in the second quarter. According to Refinitiv data, quarterly revenue was $74.6 billion against an expected $72.82 billion, and earnings per share were $1.44 against an expected $1.34. In the report, the “Other Revenue” category, which includes the AI business, reached $8.142 billion, up 24.2% year-on-year, far outpacing the overall revenue growth rate.

With that in mind, let’s look back at Google’s history in the field of artificial intelligence. AI is one of the hottest areas of technology today, and its increasingly widespread applications are profoundly changing how we live and work. As one of the world’s most important technology companies, Google has long been at the forefront of AI and has made significant breakthroughs and achievements in the field.

Since its inception, Google has always emphasized the combination of business and technology, and its technological capabilities have provided a strong impetus for its business development. Google’s co-founders Larry Page and Sergey Brin publicly stated 23 years ago that artificial intelligence would transform the company: “An ideal search engine is smart, it must understand your query and it must understand all the documents, and that requires AI.”

In 2016, Google announced a shift from a “Mobile First” to an “AI First” strategy, and AI gradually became the most important part of Google’s strategic map. Over the years, Google’s AI efforts have grown from the Google Brain team into multiple research labs, including Google AI and Google DeepMind. Their results have been applied across Google’s products, and this year the company launched its own large models.

After a four-year absence, Google co-founder Sergey Brin returned to the company to work on Google’s next-generation AI system, Gemini. Gemini is said to combine some of the strengths of the AlphaGo system with the remarkable language abilities of large models, and is expected to be a more powerful language model than ChatGPT. This article surveys the important milestones, technological applications, and impacts in the history of Google’s AI development. By examining Google’s AI technology, it aims to deepen our understanding of how AI has developed and where it is headed, and to explore its significance for society, the economy, and technology.

Google Brain

In 2011, Google established the Google Brain team, a research group dedicated to artificial intelligence that grew out of a Google X project. The name reflected the team’s ambition to make artificial intelligence think and operate like the human brain. The founding of Google Brain is considered the first milestone in Google’s AI development because it laid a solid foundation for many of Google’s subsequent major moves in the field. Google Brain was co-founded by Google researchers Jeff Dean and Greg Corrado and Stanford University professor Andrew Ng. In 2013, Geoffrey Hinton, a leading researcher in deep learning, also joined the team.

From the outset, Google Brain sought to develop intelligent systems that could learn on their own from massive amounts of data. In 2012, Google Brain researchers trained a pattern-recognition algorithm on millions of YouTube images, and the system famously learned to recognize cats in experiments. This marked a major breakthrough in pattern-recognition research and drove the development of the field for many years to come.

Google Brain has made groundbreaking achievements in many fields such as machine translation, image recognition, image enhancement, machine learning security, and robotics. It also open-sourced an important framework in the history of AI, TensorFlow, which has lowered the threshold for AI research and development and greatly promoted the development of AI. These research results have provided an important foundation for the development of AI technology and promoted the widespread application of deep learning algorithms in academia and industry.

In addition to academic research, the Google Brain project has also provided support for the development of other Google products. Products such as Google Image Search and Google Voice Search use deep learning algorithms from the Google Brain project, greatly improving search results. In addition, Google Brain’s achievements have been applied in the Android operating system’s voice recognition system, Google Photos’ photo search, intelligent reply in Gmail, and video recommendations on YouTube.

In April 2023, the Google Brain team was integrated into the DeepMind team to form the new Google DeepMind.

Interestingly, since 2011, Google’s stock price has been skyrocketing, from around $10 in 2011 to over $150 at its peak.


TensorFlow

TensorFlow originated from DistBelief, a project Google Brain started in 2011. DistBelief was never open-sourced, but it was used in Google’s internal research and product development. The name TensorFlow refers to the operations neural networks perform on multidimensional arrays of data, called tensors.
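The idea behind the name can be illustrated without the framework itself: a tensor is just an n-dimensional array, and a network layer is a chain of operations through which such arrays “flow.” Below is a minimal pure-Python sketch (not TensorFlow code; all names are illustrative) of a dense layer acting on a rank-2 tensor:

```python
# A rank-2 tensor is simply a 2-D array; a dense layer is a matrix
# multiply followed by a nonlinearity applied to such arrays.
# Pure-Python illustration; real frameworks run this on GPUs/TPUs.

def matmul(a, b):
    """Multiply two rank-2 tensors given as nested lists."""
    cols_b = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols_b]
            for row in a]

def relu(t):
    """Elementwise max(0, x) over a rank-2 tensor."""
    return [[max(0.0, x) for x in row] for row in t]

x = [[1.0, 2.0]]                # input tensor, shape (1, 2)
w = [[0.5, -1.0], [2.0, 1.0]]   # weight tensor, shape (2, 2)
out = relu(matmul(x, w))        # the tensor "flows" through two ops
print(out)                      # [[4.5, 1.0]]
```

Frameworks like TensorFlow generalize exactly this picture to tensors of arbitrary rank and to hardware-accelerated kernels.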

In November 2015, Google published the TensorFlow white paper and open-sourced TensorFlow 0.1; the initial version could only run on a single machine. In April 2016, Google released version 0.8, which added distributed training. Google reported that with 100 GPUs, TensorFlow reached 78% accuracy on an image-classification task in under 65 hours of training. In a fiercely competitive market, distributed TensorFlow meant the framework could enter the AI industry at real scale. In June 2016, version 0.9 added support for multiple platforms, and it was this multi-platform support that let TensorFlow pull ahead of rival deep learning frameworks.

On February 11, 2017, Google released the first stable version, TensorFlow 1.0.0. TensorFlow went on to become one of the most popular open-source projects on GitHub, cementing its leading position in the field of deep learning.

In September 2019, responding to PyTorch’s competitive advantage, the TensorFlow team released version 2.0, which introduced many changes, the most important being eager execution. It replaced the static computation graph with the “define-by-run” scheme first popularized by Chainer and later by PyTorch.
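The difference is easy to show in miniature. In a define-by-run system there is no separate graph-construction step: the graph is recorded as ordinary code executes, then walked backwards for gradients. The following is a toy pure-Python sketch of that idea (not TensorFlow or PyTorch code; the `Var` class and its methods are invented for illustration):

```python
# Minimal "define-by-run" autodiff sketch: the computation graph is
# recorded while operations execute, then traversed in reverse to
# accumulate gradients -- the scheme eager execution adopted.

class Var:
    def __init__(self, value, parents=()):
        self.value = value        # forward result
        self.parents = parents    # list of (parent Var, local gradient)
        self.grad = 0.0

    def __mul__(self, other):
        # Record the op as it runs: d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        # Walk the recorded graph backwards, chaining local gradients
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x = Var(3.0)
y = x * x + x      # graph is built simply by running Python code
y.backward()
print(y.value)     # 12.0
print(x.grad)      # dy/dx = 2x + 1 = 7.0
```

Because the graph is just a trace of executed Python, control flow (loops, conditionals) works naturally, which is the main usability gain over static-graph TensorFlow 1.x.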

Despite ups and downs, the birth and development of TensorFlow have greatly promoted the progress of deep learning research and made deep learning valuable in the industry.


TPU

In May 2016, the first-generation TPU (Tensor Processing Unit) chip was unveiled at the Google I/O conference. It is an AI processor designed specifically to accelerate deep-neural-network computation in the TensorFlow framework. As a custom ASIC, it is more computationally efficient than general-purpose CPU and GPU chips: Google reported a 15-30x performance improvement and a 30-80x efficiency (performance per watt) improvement over contemporary CPUs and GPUs.

The first-generation TPU helped AlphaGo defeat Lee Sedol in the world-famous man-versus-machine Go match and became famous overnight. The TPU went through four iterations in five years, the latest being the fourth version, released in 2021; a TPU v4 Pod currently delivers roughly 1 EFLOPS of compute.

Currently, TPUs are not sold to the public and are only available for use in Google Cloud. In the history of AI chip development, Google TPU is a groundbreaking technological innovation that not only breaks the monopoly of GPUs but also creates a new competition pattern for cloud-based AI chips.

Vertex AI

On May 19, 2021, Vertex AI was announced at the Google I/O conference. It is a managed machine learning platform on Google Cloud that lets developers accelerate the deployment and maintenance of AI models; Google claimed it required roughly 80% fewer lines of code than competing platforms at the time.

On March 14, 2023, Google released the PaLM API, which developers can use on Google Cloud together with MakerSuite, a low-code tool built on the PaLM model. With MakerSuite, developers can quickly test and iterate on models in the browser, refine prompts, and augment datasets with synthetic data, steps that were previously cumbersome and had a high barrier to entry in AI development. They can also tune models and generate code for specific languages and frameworks (such as Python and Node.js) for embedding in applications.

On March 15, 2023, Google upgraded Vertex AI. On one hand, it can now select and call the latest foundation models from Google and DeepMind, as well as open-source and third-party models. On the other, Google launched the Generative AI App Builder, which lets developers quickly build applications such as bots, chat interfaces, custom search engines, and digital assistants. Developers can access Google’s foundation models through APIs and use templates to create applications within minutes or hours. The AI App Builder is still in beta and requires contacting the sales team to obtain trusted-tester status.


DeepMind

DeepMind is an artificial intelligence startup founded by Hassabis and Legg in 2010. In January 2014, it was acquired by Google and became a subsidiary of the company. After the 2015 restructuring, it became a subsidiary of Alphabet, and in April 2023 it merged with Google Brain to form Google DeepMind.

Looking back, Google’s acquisition of DeepMind brought much more than just financial benefits. It provided Google and Alphabet with a strategic advantage in the global AI race. Firstly, it helped to attract AI talent, causing competitors such as Facebook, Microsoft, and Amazon to lose some of their advantages in research manpower. Secondly, it enabled Google to integrate its business deeply with AI. As described by DeepMind CEO Hassabis, it combined “the long-term vision of academia with the energy and focus of a tech start-up.” The achievements of DeepMind over the past decade are sure to go down in the history of human AI development.

AlphaGo Man vs. Machine

In March 2016, a historic man-versus-machine match took place in Seoul, South Korea, between AlphaGo, the Go-playing program developed by DeepMind, and Korean Go champion Lee Sedol. AlphaGo ultimately defeated Lee Sedol 4:1. In May 2017, AlphaGo went on to defeat Chinese Go champion Ke Jie 3:0. These victories marked artificial intelligence surpassing human level in a domain long thought to require human intelligence. For a while, people worried that AI would soon exceed and even dominate humanity; in hindsight, such concerns were somewhat excessive.


AlphaFold

In 2018, DeepMind’s protein-folding program AlphaFold produced the most accurate structure prediction for 25 of the 43 proteins assessed, winning the 13th Critical Assessment of Structure Prediction (CASP13). In September 2022, Demis Hassabis and John Jumper won the Breakthrough Prize in Life Sciences for AlphaFold’s ability to rapidly and accurately predict the three-dimensional structure of proteins.

AI-controlled nuclear fusion

DeepMind is one of the earliest research institutions in the world to apply artificial intelligence to scientific research (“AI for Science”). In recent years, it has achieved many remarkable accomplishments, successfully leaving its mark on fields such as biology, chemistry, mathematics, and physics simulations. It has attracted many scholars to devote themselves to research work in the “AI for Science” direction.

In 2022, DeepMind unveiled a previously secret three-year research project: a deep reinforcement learning system that can maintain a stable nuclear fusion plasma inside a tokamak. This opens a new path for fusion research, one that could influence the design of future tokamaks and even accelerate the development of viable fusion reactors. The work was published in Nature.

A year later, DeepMind announced a further breakthrough: in the latest simulated experiments, the accuracy of the plasma shape was improved by 65%.
