Every year, Stanford puts out an AI Index1 with a massive amount of data attempting to sum up the current state of AI.
In 2022, it was 196 pages; last year, it was 386; now, it’s over 500 ... The report details where research is going and covers current capabilities, ethics, policy, and more.
It is super nerdy ... yet, it’s probably worth a skim (or ask one of the new AI services to summarize the key points, put it into an outline, and create a business strategy for your business from the items that are likely to create the best sustainable competitive advantages for you in your industry).
For reference, here are my highlights from 2022 and 2023.
AI as a whole received less private investment than last year - despite an 8x funding increase for generative AI over the same period.
Even with less private investment, progress in AI accelerated in 2023.
We saw the release of new state-of-the-art systems like GPT-4, Gemini, and Claude 3. These systems are also much more multimodal than previous systems. They’re fluent in dozens of languages, can process audio and video, and even explain memes.
So, while we’re seeing a decrease in the rate at which AI gets investment dollars and new job headcount, we’re starting to see the dam overflow. The groundwork laid over the past few years is paying dividends. Here are a few things that caught my eye and might help set some high-level context for you.
Technological Improvements In AI
via AI Index 2024
Even since 2022, the capabilities of key models have increased exponentially. LLMs like GPT-4 and Gemini Ultra are very impressive. In fact, Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark. However, there’s a direct correlation between the performance of those systems and the cost to train them.
The number of new LLMs released has doubled in the last year. Two-thirds of them are open-source, but the highest-performing models are closed systems.
While looking at the pure technical improvements is important, it’s also worth realizing AI’s increased creativity and applications. For example, Auto-GPT takes GPT-4 and makes it almost autonomous. It can perform tasks with very little human intervention, it can self-prompt, and it has internet access & long-term and short-term memory management.
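The agentic loop behind systems like Auto-GPT is conceptually simple: the model's output becomes its next input, with memory carried along. Here's a minimal sketch of that idea - `fake_llm` is a stand-in for a real model call, not Auto-GPT's actual API, and the memory structures are deliberately simplified:

```python
# Minimal agentic loop: the model "self-prompts" by turning its own
# output into the next input, while keeping short- and long-term memory.
# fake_llm is a placeholder for a real model call (e.g., an API request).

def fake_llm(prompt: str) -> str:
    # A real system would call a hosted model here; this stub just
    # derives a deterministic "next step" from the prompt.
    step = prompt.count("->")
    return "DONE" if step >= 3 else f"{prompt} -> step{step + 1}"

def run_agent(goal: str, max_iterations: int = 10) -> list:
    short_term = [goal]          # working context fed back into the model
    long_term = []               # persistent log of everything produced
    for _ in range(max_iterations):
        thought = fake_llm(" ".join(short_term))
        long_term.append(thought)
        if thought.endswith("DONE"):
            break
        short_term = [thought]   # self-prompt: output becomes next input
    return long_term

history = run_agent("research topic")
```

The key property is in the last line of the loop: nothing outside the agent supplies the next prompt, which is what "very little human intervention" means in practice.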
Here is an important distinction to make … We’re not only getting better at creating models, but we’re getting better at using them. Meanwhile, the models are getting better at improving themselves.
- Researchers estimate that computer scientists could run out of high-quality language data for LLMs by the end of this year, exhaust low-quality language data within two decades, and use up image data by the late 2030s. This means we’ll increasingly rely on synthetic data to train AI systems. The case for synthetic data can be compelling, but when it makes up the majority of a dataset, it can result in model collapse.
- With limited large datasets, fine-tuning has grown increasingly popular. Adding smaller but curated datasets to a model’s training regimen can boost overall model performance while also sharpening the model’s capabilities on specific tasks. It also allows for more precise control over behavior.
- Better AI means better data, which means ... you guessed it, even better AI. New tools like SegmentAnything and Skoltech are being used to generate specialized data for AI. While self-improvement isn’t possible yet without intervention, AI has been improving at an incredible pace.
The Proliferation of AI
First, let’s look at patent growth.
via AI Index 2024
The adoption of AI and the claims on AI “real estate” are still increasing. The number of AI patents has skyrocketed. From 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. Since 2010, the number of granted AI patents has increased more than 31 times.
As AI has improved, it has increasingly forced its way into our lives. We’re seeing more products, companies, and individual use cases for consumers in the general public.
While the number of AI-specific jobs has decreased since 2021, positions that leverage AI have significantly increased.
Meanwhile, despite the decrease in private investment, massive tranches of money are moving toward key AI-powered endeavors. For example, InstaDeep was acquired by BioNTech for $680 million to advance AI-powered drug discovery, Cohere raised $270 million to develop an AI ecosystem for enterprise use, Databricks bought MosaicML for $1.3 billion, and Thomson Reuters acquired Casetext, an AI legal assistant.
Not to mention the investments and attention from companies like Hugging Face, Microsoft, Google, Bloomberg, Adobe, SAP, and Amazon.
Ethical AI
via AI Index 2024
Unfortunately, the number of AI misuse incidents is skyrocketing. And it’s more than just deepfakes: AI can be used for many nefarious purposes that aren’t as visible, on top of intrinsic risks like those posed by self-driving cars. A global survey on responsible AI highlights that companies’ top AI-related concerns include privacy, data security, and reliability.
When you invent the car, you also invent the potential for car crashes ... when you ‘invent’ nuclear energy, you create the potential for nuclear weapons.
There are other potential negatives as well. For example, many AI systems (much like cryptocurrencies) consume vast amounts of energy and produce carbon emissions. So, the ecological impact has to be taken into account, too.
Luckily, many of today’s best minds are focused on creating bumpers to rein in AI and to prevent and discourage bad actors. The number of AI-related regulations has risen significantly: in 2023, there were 25 AI-related regulations, a stark increase from just one in 2016, and the total grew by 56.3% in the past year alone. Regulating AI has also become increasingly prominent in legislative proceedings across the globe, with mentions up roughly 10x since 2016.
Not to mention, US government agencies allocated over $1.8 billion to AI research and development spending in 2023. Our government has tripled its funding for AI since 2018 and is trying to increase its budget again this year.
Conclusion
Artificial Intelligence is inevitable. Frankly, it’s already here. Not only that ... it’s growing, and it’s becoming so powerful and impressive that I’m no longer surprised by how amazing it continues to become.
Although America leads the charge in AI, Americans rank among the least positive about the benefits of these products and services; China, Saudi Arabia, and India rank the highest. Only 34% of Americans anticipate AI will boost the economy, and 32% believe it will enhance the job market. Significant demographic differences exist in perceptions of AI’s potential to enhance livelihoods, with younger generations generally more optimistic.
We’re at an interesting inflection point where fear of repercussions could derail and diminish innovation - slowing down our technological advance.
Much of this fear is based on emerging models demonstrating new (and potentially unpredictable) capabilities. Researchers have shown that these emergent capabilities mostly appear when nonlinear or discontinuous metrics are used ... but vanish with linear and continuous metrics. So far, even with LLMs, intrinsic self-correction has proven very difficult: when a model is left to decide on self-correction without guidance, performance declines across benchmarks.
If we don’t continue to lead the charge, other countries will … you can already see it with China leading the AI patent explosion.
We need to address the fears and culture around AI in America. The benefits seem to outweigh the costs – but we have to account for the costs (time, resources, fees, and friction) and attempt to minimize potential risks – because those are real (and growing) as well.
Pioneers often get arrows in their backs and blood on their shoes. But they are also the first to reach the new world.
Luckily, I think momentum is moving in the right direction. Last year, it was rewarding to see my peers start to use AI apps. Now, many of them are using AI-inspired vocabulary and thinking seriously about how best to adopt AI into the fabric of their business.
We are on the right path.
Onwards!
1Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024. The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.
On The Horizon: Artificial Intelligence Agents
In last week's article on Stanford's AI Index, we broadly covered many subjects.
There's one I felt like covering in more depth. It's the concept of AI Agents.
One way to improve AI is to create agentic AI systems capable of autonomous operation in specific environments. However, agentic AI has long challenged computer scientists. The technology is only just now starting to show promise. Current agents can play complex games, like Minecraft, and are much better at tackling real-world tasks like research assistance and retail shopping.
A common discussion point is the future of work. The concept deals with how automation and AI will redefine the workforce, the workday, and even what we consider to be work.
Up until now, AI has been in very narrow applications. Powerful applications, but with limited breadth of scope. Generative AI and LLMs have increased the variety of tasks we can use AI for, but that's only the beginning.
via Aniket Hingane
AI agents represent a massive step toward intelligent, autonomous, and multi-modal systems working alongside skilled humans (and replacing unskilled workers) in a wide variety of scenarios.
Eventually, these agents will be able to understand, learn, and solve problems without human intervention. There are a few critical improvements necessary to make that possible.
As models become more flexible in understanding and accomplishing their goals, and begin to apply that knowledge to new real-world domains, they will go from intelligent-seeming tools to powerful partners able to handle multiple tasks like a human would.
While they won't be human (or perhaps even seem human), we are on the verge of a technological shift that is a massive improvement from today's chatbots.
I like to think of these agents as the new assembly line. The assembly line revolutionized the workforce and drove an industrial revolution, and I believe AI agents will do the same.
As technology evolves, improvements in efficiency, effectiveness, and certainty are inevitable. For example, with a proverbial army of agents creating, refining, and releasing content, it is easy to imagine a process that would take multiple humans a week getting done by agents in under an hour (even with human approval processes).
To make it literal, imagine using agents to write this article. One agent could be skilled in writing outlines and crafting headlines. Another could focus on research and the verification of that research. Then you have an agent to write, an agent to edit and proofread, and a conductor agent that makes sure the quality is up to snuff and replicates my voice. If the goal were to make it go viral, there could be a virality agent, an SEO keyword agent, etc.
Separating the activities into multiple agents (instead of trying to craft a single, vertically integrated agent) reduces the chances of "hallucinations" and self-aggrandizement. It can also, in theory, remove the human from the process entirely.
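The article pipeline described above can be sketched as a chain of specialized agents. In this toy version, each "agent" is stubbed as a plain function; a real system would back each one with its own model call and prompt, and the conductor's quality check would be far richer than a section count:

```python
# Sketch of a multi-agent writing pipeline: outline -> research ->
# write -> edit, with a conductor acting as the quality gate.

def outline_agent(topic):
    return [f"Intro to {topic}", "Key findings", "Takeaways"]

def research_agent(outline):
    return {section: f"facts about '{section}'" for section in outline}

def writer_agent(research):
    return "\n".join(f"{sec}: {facts}" for sec, facts in research.items())

def editor_agent(draft):
    return draft.strip()

def conductor_agent(draft, min_sections=3):
    # Quality gate: reject drafts that dropped sections of the outline.
    if draft.count("\n") + 1 < min_sections:
        raise ValueError("draft is missing sections; send back for rework")
    return draft

article = conductor_agent(editor_agent(writer_agent(
    research_agent(outline_agent("AI agents")))))
```

Because each stage has one narrow job and a checkable output, a failure (a hallucinated fact, a dropped section) can be caught and retried at that stage rather than contaminating the whole piece - which is exactly the argument for splitting the work across agents.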
Now, I enjoy the writing process. I'm not trying to remove myself from this process. But, the capability is still there.
As agentification increases, I believe humans will remain a necessary part of the feedback loop. Soon, we will start to see agent-based companies. Nonetheless, I still believe humans will be an important part of the workforce (at least during my lifetime).
Another reason humans matter is that they are still the gatekeepers ... meaning, humans have to become comfortable with a process before they allow it.
Trust and transparency are critical to AI adoption. Even if AI excels at a task, people are unlikely to use it blindly. To truly embrace AI, humans need to trust its capabilities and understand how it arrives at its results. This means AI developers must prioritize building systems that are both effective and understandable. By fostering a sense of ease and trust, users will be more receptive to the benefits AI or automation offers.
Said a different way, just because AI can do something doesn't mean you will use the tool or let AI do it. It has to be done a "certain" way for you to allow it ... and that involves a lot of trust. As a practical reality, humans don't just have to trust the technology; they also have to trust and understand the process. That means the person building the AI or creating the automation must consider what it would take for a human to feel comfortable enough to allow the benefit.
Especially as AI becomes more common (and as an increasingly large amount of content becomes solely created by artificial systems), the human touch will become a differentiator and a way to appear premium.
via Aniket Hingane
In my business, the goal has never been to automate away the high-value, high-touch parts of our work. I want to build authentic relationships with the people I care about — and AI and automation promise to eliminate frustration and busywork, freeing us up to do just that.
The goal in your business should be to identify the parts in between those high-touch periods that aren't your unique ability - and find ways to automate and outsource them.
Remember, the heart of AI is still human (at least until our AI Overlords tell us otherwise).
Onwards!
Posted at 05:33 PM in Business, Current Affairs, Gadgets, Ideas, Market Commentary, Personal Development, Science, Trading Tools, Web/Tech