When computers began to become common around 70 years ago, it would have been impossible to imagine that, as our society became completely digitized, individuals would lead a “second life” in the digital world.
Technical avatars continually push the boundaries of recognizability, new things are constantly being tested that improve our physical living conditions in the vast realm of online “parallel universes”, and we have also left our mark on this world: “the data”. But what does data mean? There is no definitive answer to this fundamental but complex question in information science. Simply put, data is the product of observations and observational objectives, including independent individuals, organizations, events, and their environment.
These observations are made based on a range of perspectives, methods and tools, along with their respective symbology, such as weights and measures.
Data is generated from recording the characteristics and behavior of observed objects using symbology. Data can exist in many different forms, including letters, numbers, graphics, audio, video, etc.
Although data may exist in digital or non-digital form (e.g., when recorded on paper), it is increasingly digitized with advances in Information and Communication Technology (ICT).
According to statistics from Statista, the number of connected devices worldwide (e.g., smart TVs, smart speakers, toys, and smart home devices) is expected to reach 30.9 billion by 2025. These connected devices and their services will be a huge source of data.
IDC predicts that by 2025, the global datasphere will expand to 163 ZB (1 ZB equals 1,000 billion GB), roughly ten times the 16.1 ZB of data generated in 2016.
The question is, how do we unlock the value of data on such a large scale? Artificial Intelligence has shown us the answer.
The 60-year journey of Artificial Intelligence (AI) development
In the summer of 1956, during a conference at Dartmouth College, a group of young scientists, including Marvin Minsky, coined the term “Artificial Intelligence.”
In 2006, Geoffrey Hinton proposed Deep Learning, an approach based on artificial neural networks that enabled breakthroughs in AI performance. The current wave of AI breakthroughs, however, is very different from those of 1956 and 2006.
Machine Learning algorithms based on Big Data and powerful computing power have made breakthroughs in many fields, such as computer vision, speech recognition and natural language processing.
Meanwhile, AI-powered applications have also begun to grow, leading to the realization of truly “intelligent” AIs.
Nowadays, most people are familiar with Artificial Intelligence. The technology is part of our daily lives, from online shopping to industrial production, and the convenience and advancements it brings are visible everywhere.
The advancement of related theories and technologies has broadened the scope of AI applications, resulting in increasing commercialization. Around the world, a growing number of governments and businesses are realizing the importance of AI and are beginning to use the technology in national policies and businesses.
A decade ago, the rise of the mobile Internet pushed Artificial Intelligence toward the “singularity” (a hypothesis that at some point machines will overtake humans) of explosive growth.
With products and services improving at an increasingly rapid pace, mobile device vendors such as Apple and Samsung, and mobile Internet service providers such as Alibaba, Tencent, Meta, and Google, have helped the mobile Internet break through the spatial and temporal constraints of the conventional PC Internet.
In addition to making human-computer interaction increasingly convenient, the mobile Internet has created breakthroughs in AI technology, expanding the capabilities of natural language processing, machine learning, and computer vision algorithms.
Deloitte’s 2019 white paper on global AI development predicts that the global AI market will exceed $6 trillion by 2025, growing at a compound annual rate of 30% from 2017 to 2025.
A research report published by PwC shows that global GDP will increase by 14% by 2030 due to the adoption of AI, adding $15.7 trillion to the global economy, more than the current combined output of China and India. Over the next few years, the global AI market will grow exponentially.
Over the past 60 years, the field of AI has flourished. As we enter the fourth industrial revolution: the technological revolution, advances in AI are becoming more apparent than ever.
For AI to become the core, general-purpose technology of the coming technological revolution, three key factors are essential: data, algorithms, and computing power.
The explosion of the Internet and the widespread adoption of the mobile Internet have led to incredible growth in global data. This valuable real-world data provides AI’s “manufacturing material”.
Meanwhile, improvements in chip processing power, wide-scale cloud adoption and falling hardware prices have caused a boom in global computing capacity. Computing power has provided AI with a valuable “production engine”.
With the revolutionary achievements in Machine Learning, artificial neural networks and computer vision, as well as the sheer size of the industry and solutions market, AI algorithms have seen rapid development.
AI is now being applied in many fields, including healthcare, finance, education, security, and more. Algorithms have provided Artificial Intelligence with a powerful “production tool”.
With these three growth drivers, Artificial Intelligence has ushered in the “golden decade” of rapid growth but is also gradually revealing the challenges facing AI.
The first challenge is pressure on data governance and privacy. In 2018, the European Union introduced the General Data Protection Regulation (GDPR). In 2021, China’s Data Security Law and Personal Information Protection Law also officially entered into force.
Specifically, these laws will focus on individual rights and interests, aiming to protect individual Chinese citizens with respect to their interests in terms of privacy, dignity, property, etc.
The law defines “personal information” as all types of information about individuals, whether electronically recorded or otherwise documented. Strict privacy and personal data rules effectively prevent misuse of data.
Moreover, pressure on data privacy also comes from within. Companies that own data face a huge dilemma. While sharing data and interacting with other companies clearly improves the performance of AI algorithms, they also need to ensure that their data is not leaked.
Therefore, strict compliance measures must be observed in collaborative activities between data-related departments, as well as in data cooperation with third-party partners. When working on projects involving data collaboration, security of data flow is often a primary concern.
Additionally, training AI models is expensive. While advancements in hardware and software have led to a 37% year-over-year decline in per-unit AI training costs, the total cost of AI training continues to rise because the scale of AI models is growing even faster than anticipated (roughly tenfold per year).
Some organizations estimate that the cost of training current AI models could increase 100 times by 2025, from around $1 million today to over $100 million.
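The compounding behind these estimates is straightforward to check. A minimal back-of-envelope sketch, using only the figures quoted above (a 37% annual decline in per-unit cost against a tenfold annual growth in model scale; these inputs are the article's figures, not measured values):

```python
# Back-of-envelope: net training-cost growth when per-unit cost falls 37%/yr
# but model scale grows 10x/yr (figures quoted in the text above).
cost_factor = 1 - 0.37   # fraction of per-unit cost retained each year
scale_factor = 10        # model-scale growth per year

net_per_year = cost_factor * scale_factor   # ≈ 6.3x net cost growth per year
cost = 1_000_000                            # ~ $1M today, per the text
for year in range(3):
    cost *= net_per_year

print(f"net yearly multiplier: {net_per_year:.1f}x")
print(f"cost after 3 years: ${cost:,.0f}")
```

Even though unit costs fall, the net multiplier of roughly 6.3x per year pushes a $1 million training run past $100 million within three years, which is consistent with the 100-fold estimate above.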
Faced with challenges such as data privacy, high costs, and technology concentration, how should Artificial Intelligence break down barriers and create new breakthroughs?
The research and application of some cutting-edge technologies have helped pave the way for the future development of AI.
AI for all
The emergence of blockchain and privacy-preserving computing has truly inspired AI.
The intelligent use of data has catalyzed the interplay between blockchain, privacy-preserving computing, and AI in different ways. When these technologies come together, data usage reaches new heights, along with stronger blockchain infrastructure and greater potential for AI.
Blockchain consensus algorithms can support the coordination of tasks in AI systems, while the chain’s technical characteristics allow data to be linked and traced. In this way, blockchain can encourage the contribution of more data types, algorithms, and computing power to create more efficient AI models.
When data privacy applications are needed, privacy-preserving computing allows data providers to analyze and calculate data without disclosing the original data. Data is “available but invisible” during circulation and integration, thereby adhering to privacy and security controls, and facilitating data sharing and value exchange.
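To make the “available but invisible” idea concrete, one common privacy-preserving technique is additive secret sharing, a building block of secure multi-party computation (MPC). The sketch below is purely illustrative and not taken from any particular platform (the three-party setup, field size, and function names are all assumptions): each data provider splits its value into random shares, each party only ever sums the shares it holds, and only the aggregate is reconstructed.

```python
import random

PRIME = 2**31 - 1  # prime modulus; all share arithmetic happens in this field

def share(value, n_parties):
    """Split a secret into n additive shares that sum to the secret mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(values, n_parties=3):
    """Each provider shares its value across the parties; each party sums the
    shares it holds locally, and only the final total is reconstructed."""
    per_party = [0] * n_parties  # running share-sum held by each party
    for v in values:
        for j, s in enumerate(share(v, n_parties)):
            per_party[j] = (per_party[j] + s) % PRIME
    # Reconstruction combines the parties' local sums; individual inputs stay hidden
    return sum(per_party) % PRIME

# Three data providers learn their combined total without revealing inputs
print(secure_sum([120, 45, 300]))  # → 465
```

Because every individual share is a uniformly random field element, no single party learns anything about any provider’s input, yet the circulation and aggregation of the data still happens, which is exactly the “available but invisible” property described above.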
There are many platforms built on blockchain and privacy computing in the market, such as AntChain Morse MPC platform and Baidu MesaTEE platform.
However, most of these platforms are aimed at businesses. The reason is simple: exchanging data between companies is the most fundamental business need, as it resolves the basic conflict companies face between sharing and interacting with data and improving their AI algorithms. Very little effort is invested in the democratization of AI or in the establishment of Artificial General Intelligence (AGI).
Enterprise services are just the beginning of what AI has been able to achieve so far. In the near future, data ownership will eventually be returned to individuals. As a result, technology, materials and production tools will be transferred and returned to individuals.
The only path to AGI is to focus on “next-generation inputs” with technical underpinnings such as AI, blockchain, and privacy-preserving computing, while urgently promoting the development of advanced AI.
The future “singularity” is the leap of the data economy, which may be entering its own flowering period.