A recent study conducted by economists has pointed out a startling trend in the distribution of artificial intelligence (AI) technologies across the country. The study, based on government data from a 2018 survey of 474,000 companies (representative of around four million businesses in the US), found that fewer than 6% of businesses were using AI technologies at that time. Among larger businesses, however, those with more than 5,000 employees, the adoption rate was above 18%.
The use of AI technologies has risen considerably in the last five years, with a McKinsey study indicating that 79% of participants had some exposure to generative AI either at work or outside of it. A significant 22% reported using AI technology regularly. While the study did not focus extensively on the financial outcomes of such adoption, it did suggest a positive correlation between AI adoption and revenue growth.
Distribution of AI Uses Causes Concern
The economists, however, expressed concern over the unequal spread of the AI economy. The study indicates that AI technologies are predominantly being adopted in what the authors call “AI hubs.” It states, “the potential for an ‘AI divide’ across regions and cities is attracting concern.”
The analysis shows that the use of AI technologies, including machine learning, autonomous vehicles, machine vision, voice recognition, and natural language processing, was mainly centered in California's Silicon Valley and San Francisco Bay Area. Other cities with significant adoption included Nashville, San Antonio, Las Vegas, New Orleans, San Diego, and Tampa, and to a lesser extent, Riverside, Louisville, Columbus, Austin, and Atlanta.
Potential Upsides and Downsides of AI Adoption
Kristina McElheran, one of the authors of the study and an associate professor of strategic management at the University of Toronto Scarborough in Canada, commented on the clustering of economic activity. While she acknowledged that the concentration of expertise can lead to speedier technology development, she also pointed out the potential downside if certain regions are consistently left out or become too specialized. The wide-ranging report, produced by economists working with the National Bureau of Economic Research, mentioned a risk associated with regions specializing in one type of economic activity: should a shock occur in that specific sector, the regional economy could face serious challenges.
However, McElheran hopes that process innovation and leadership in AI will help to mitigate these risks, pointing to a finding in the study that founders motivated to bring new ideas into the world and contribute to their communities were strongly correlated with AI use. This, she believes, could have a profound influence on the rate and direction of AI adoption. Nonetheless, she remarked that navigating the hiccups and missteps associated with implementing new technologies is a significant challenge, but one that could potentially lead to great returns.
Rise of AI and Mainstreaming
2023 has been the year in which AI became prominent. As generative AI becomes more powerful and widely available, regulators are scrambling to create laws to govern this new technology.
The UK has taken an active role: the Competition and Markets Authority (CMA) has stepped into the realm of artificial intelligence regulation by unveiling a comprehensive set of principles aimed at guiding the development and deployment of AI foundation models.
In the US, the White House has announced that eight more tech companies, namely Adobe, IBM, Nvidia, Cohere, Palantir, Salesforce, Scale AI, and Stability AI, have pledged their commitment to the development of safe, secure, and trustworthy artificial intelligence (AI). This move builds upon the Biden-Harris Administration's efforts to manage AI risks and harness its benefits.
In July, when the initiative was launched, leading U.S. tech companies, including OpenAI, Google, Microsoft, Amazon, Anthropic, Inflection AI, and Meta, agreed to the voluntary safeguards. The initiative encourages companies to work across three categories, safety, security, and trust, and applies to generative models that surpass the current industry frontier.