Late June 2025 - The Bimonthly AI Archive
AI in the Military (???), The Illusion of Thinking, and AI-powered layoffs: all the highlights from June 16-June 30, 2025
It’s been less than three years since ChatGPT was released, but AI has already captured billions in venture capital funding, upended white collar industries, and redefined the content we see online.
The rhetoric around AI has also shifted. Companies like OpenAI, which once called for the ethical development of AI and using AI to promote social good, are now racing to maximize profits.
Tech CEOs, busy pitching their products to investors and consumers, claim that artificial intelligence could surpass human intelligence by 2026. Companies have already begun trimming their U.S. workforces under the pretense that AI will be able to take over human jobs.
This series aims to document the major headlines, stances, and perspectives on AI in bimonthly snapshots. It’ll be an archive of sorts, juxtaposing bold CEO declarations against the societal realities of an AI-powered world and tracking the ever-shifting goalposts of what’s ethical, acceptable, and truly intelligent.
The Tech Industry
AI Supercharging the Military-Industrial Complex?
Tech CEOs have found their next big customer — the U.S. military. Earlier this month, OpenAI announced a $200 million contract with the U.S. Department of Defense for “identifying and prototyping areas where advanced AI could improve military and internal operations.” Ask Sage also nabbed $10 million for a genAI partnership with the U.S. Army, while European startup Helsing, which uses AI to inform military decisions, announced a €600 million funding round.
Top executives at Meta, Palantir, and OpenAI are also joining the Army’s new “Detachment 201” program. They’ll serve as lieutenant colonels, working 120 hours a year to advise the Department of Defense on its AI efforts and provide “direct input into military strategy while their companies compete for massive defense contracts.”
Will Microsoft Walk Away from OpenAI?
In the contract between OpenAI and Microsoft, there’s a tiny clause that says if OpenAI achieves AGI, it’ll limit Microsoft’s access to its future technologies. The catch? It’ll be up to OpenAI’s board to declare whether it’s achieved AGI, and there’s no widely agreed-upon definition or benchmark of AGI. The contract allegedly defines AGI as “a system capable of generating a certain level of profit,” and OpenAI CEO Sam Altman expects the startup to achieve AGI by 2028.
As a result, Microsoft is “reportedly pushing for the removal of the clause” and is even “considering walking away” from its $13 billion investment.
Meta’s Mad Scramble to Catch Up
After Meta’s newest AI model failed to outperform its competitors, the company has been scrambling to beef up its AI efforts by hiring externally. The company has reached out to dozens of researchers at OpenAI, handing out offers as high as $100 million. On June 12, Meta announced a $14.3 billion investment in Scale AI for a 49% stake in the startup. That’s approximately 10% of Meta’s 2024 revenue, and its second-largest deal (after its acquisition of WhatsApp). As part of the deal, Scale AI’s CEO and founder Alexandr Wang, along with a team of employees, will be joining Meta’s new Superintelligence Lab.
Scale AI, however, does not build AI models of its own but instead helps create the training data for AI models, hiring contractors by the hour to label the data. In late 2024, the startup was sued for wage theft and failing to protect the mental health of workers, who were exposed to graphic images, such as “rapes, assaults on children, murders, and fatal car accidents” in the data labeling process.
Research
Is ChatGPT Eroding Your Brain?
An MIT research study found that reliance on LLMs could negatively affect people’s cognitive function. The researchers examined participants’ brain activity, finding that those who used LLMs to write an essay exhibited less brain activity than those who used a search engine or only their own brains. Additionally, when participants who had been using LLMs were asked to rely solely on their brains, their performance dropped significantly.
The study also found that using LLMs as a writing aid can produce more “homogenous” responses, as participants who used AI wrote essays that “tended to converge on common words and ideas.” The homogeneity is almost expected: LLMs are trained to predict the most probable sequence of words given their training data, which pulls their output toward a statistical average.
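As a toy illustration of that pull toward the average (the numbers below are invented for illustration, not data from the study): when a model always picks the single most likely next word, every writer leaning on it at the same point gets the same suggestion.

```python
# Toy sketch of why next-word prediction favors common words.
# These probabilities are invented for illustration, not from a real model.
next_word_probs = {
    "good": 0.35,          # the most common continuation in training data
    "great": 0.30,
    "fine": 0.20,
    "sublime": 0.10,
    "transcendent": 0.05,  # rare, distinctive word choices rank low
}

def greedy_pick(probs: dict) -> str:
    """Return the single most probable word, as greedy decoding does."""
    return max(probs, key=probs.get)

# Everyone who asks the model to finish the sentence gets "good",
# one mechanism behind the homogenous essays the study describes.
print(greedy_pick(next_word_probs))
```

Real chatbots sample with some randomness rather than always taking the top word, but the distribution still concentrates on common phrasing.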
A similar study by Santa Clara University tasked participants with coming up with ideas, with one group using ChatGPT as an aid. While that group would initially present their own ideas, they’d gradually start picking ChatGPT’s ideas instead. Additionally, as the participants continued to converse with ChatGPT, the context window would grow and reach capacity, causing ChatGPT to become “more likely to repeat or rehash material it has already produced.”
The Illusion of Thinking — Can “Reasoning Models” Actually Reason?
Earlier this month, Apple published a study called “The Illusion of Thinking,” which found that new large reasoning models (LRMs), such as OpenAI’s o1 and o3, “collapse when they're faced with increasingly complex problems.” AI companies typically evaluate their AI models on “established mathematical and coding benchmarks,” which LRMs have performed well on.
However, Apple’s study found that when LRMs are given high-complexity puzzles, their accuracy collapses entirely: they fail to apply explicit algorithms and reason inconsistently from puzzle to puzzle. And while LRMs might outperform regular models on medium-complexity tasks, they tend to be beaten by regular models on low-complexity tasks, as they often “overthink” and explore incorrect solutions.
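For a sense of scale: one of the puzzle families the study uses is the Tower of Hanoi, where difficulty is dialed up simply by adding disks. The exact recursive algorithm fits in a few lines, as in this Python sketch, yet the solution length doubles with every disk, and the study found LRM accuracy collapsing on larger instances even though no new insight is required, only faithful execution of the steps.

```python
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Exact recursive solution: move n disks from the source peg to the target peg."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))  # 1023 moves, i.e. 2**10 - 1: solutions grow exponentially
```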
Can We Make LLMs Search the Internet Instead?
AI models often use external knowledge sources to improve accuracy and reduce hallucinations, but most methods, such as retrieval-augmented generation (RAG), only involve searching through a static knowledge base instead of the Internet. While other methods, such as prompt-engineered agents, can search the web, the models are unable to “learn” to optimize their performance for Internet searches.
In this ByteDance-sponsored study, researchers propose a new framework that enables AI models to learn how best to perform real-time Internet searches using both text and images. The Internet-enabled models, which are trained with reinforcement learning, outperform RAG models. They are also better at determining when to search: they can more accurately “recognize the boundaries of their knowledge” and decide when their own “internal knowledge” is sufficient, leading to fewer external searches.
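In rough terms, the learned behavior resembles the loop below. This is a hypothetical sketch of the decision the paper describes, not code from the study: `model` and `web_search` are stand-ins, and in the actual framework the search-or-answer policy is learned via reinforcement learning rather than hand-written.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    used_search: bool

def answer_query(model, web_search, query: str) -> Answer:
    """Hypothetical search-or-answer loop; every component is a stand-in."""
    # The model first judges whether its internal knowledge is sufficient.
    if model.knows(query):
        return Answer(text=model.generate(query), used_search=False)
    # Otherwise it runs a live search (text or image) and grounds the
    # answer in the retrieved results.
    results = web_search(query)
    return Answer(text=model.generate(query, context=results), used_search=True)
```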
Politics and Society
AI Might Be Destroying Democracy
As of June 2025, Swiss scientists have documented over 200 elections in 50 countries that have been affected by AI. In some cases, candidates themselves are using AI “clones” to “translate [their] speeches and platforms into local dialects” and reach more voters. But AI is also being used to interfere with election outcomes—Russia, for instance, used AI-doctored photos and videos to catapult a far-right candidate to victory in the last Romanian presidential election, and the election even had to be re-run.
The World’s Widening Digital Divide
The digital divide between developed and developing countries is nothing new, but in April 2025, the UN warned that it’d be exacerbated by AI. Only 32 countries host AI-focused data centers, and therefore meaningful AI compute power, and the U.S., China, and the EU account for the majority of them.
The lack of compute power is causing brain drain, as top students and researchers leave developing countries for the U.S. and the EU. With the U.S.-China AI arms race, the world has also splintered into “nations that rely on China and those that depend on the United States.” The U.S. has offered Middle Eastern nations American AI chips on the condition that they avoid Chinese technology, and both Chinese and American companies are scrambling to build data centers in Southeast Asia.
Tesla Taxis in Austin
On June 22, Tesla launched Robotaxi, its service for self-driving taxis in Austin, inviting a small group of users to test out the service. CEO Elon Musk claimed that current Tesla owners could update their cars to become self-driving taxis and earn additional income.
In just a few days, videos surfaced on social media of robotaxis swerving into the wrong lane, braking abruptly, and dropping off passengers in the middle of intersections. The National Highway Traffic Safety Administration even opened an investigation into Tesla.
Why were Tesla’s autonomous vehicles struggling compared to competitors like Waymo? The short answer: Elon Musk. Waymos, and many other self-driving cars, rely on both cameras and lidar, a sensing technique that bounces millions of laser pulses off surrounding objects to “[paint] a 3D picture of [the] vehicle’s surroundings.” Musk, however, believes in relying solely on cameras. “Lidar is lame,” Musk said in 2019. “In cars, it’s friggin’ stupid. It’s expensive and unnecessary.” As of 2025, there have been over 700 crashes and 17 deaths involving Tesla’s self-driving feature.
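For the curious, the geometry behind lidar is straightforward. The toy sketch below (simplified, not any manufacturer’s actual pipeline) shows how a single laser return, just a round-trip time and two firing angles, becomes one 3D point; millions of such points per second form the 3D picture.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_return_to_point(round_trip_s: float, azimuth_rad: float,
                          elevation_rad: float) -> tuple:
    """Convert one laser return (time of flight plus angles) to an (x, y, z) point."""
    distance = round_trip_s * SPEED_OF_LIGHT / 2  # the pulse travels out and back
    x = distance * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after ~200 nanoseconds hit something ~30 meters away.
print(lidar_return_to_point(2e-7, math.radians(45), math.radians(2)))
```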
Economy
AI-Powered Layoffs
Amazon CEO Andy Jassy told employees to expect fewer jobs at the company, claiming that “using AI extensively” would lead to “efficiency gains.” “We will need fewer people doing some of the jobs that are being done today,” he wrote in an internal memo.
Amazon isn’t alone in its sentiments—tech companies have collectively laid off tens of thousands of employees this year, with many citing AI. While some companies have downsized due to increased competition with AI—edtech platform Chegg, for instance, cut 22% of its workforce as students flocked to AI alternatives—others claim that AI can effectively replace workers.
Klarna claimed that AI could do the work of 700 customer service agents, reducing its workforce by 40%. Duolingo, in an attempt to become “AI first,” announced that it would begin replacing contractors with AI. IBM said AI agents replaced the work of hundreds of HR employees at the company. AI-induced layoffs have become such a common phenomenon that the state of New York is now requiring companies to disclose the role of AI in layoffs, becoming the first state to do so. Layoffs also occurred at Meta (3,600 employees), Salesforce (1,000), Dell (2,500), and Intel (15,000), but all four companies are still hiring, cutting workers in certain departments to make room for increased AI efforts and larger AI teams.
It’s unclear whether these AI-powered layoffs will be effective—Klarna, for instance, has backpedaled and is now planning to hire more human customer service agents. “Investing in the quality of the human support is the way of the future for us,” the CEO said.
Environment
The Grid
U.S. energy companies are spending record amounts to build power plants and transmission lines. They are projected to spend over $200 billion in 2025 alone, mostly “to meet electricity demand from data [centers].” This could pose a financial burden to U.S. consumers, who have seen their energy bills rise 10% annually since 2020. For instance, New Jersey’s public utilities board warned residents that their electricity bills could increase by as much as 20% due to data center demand.
Additionally, the rise in AI could also contribute to grid instability, as data centers are being built at a faster rate than the power plants, transmission lines, and other infrastructure required to sustain the increase in energy demand. By 2030, data centers are estimated to consume double the amount of energy they consume today.
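Those growth figures compound quickly. A back-of-the-envelope check, using only the numbers cited above rather than additional data from the reports:

```python
# Back-of-the-envelope math using the figures cited above.

# Bills rising 10% annually since 2020 compound over five years:
bill_multiplier = 1.10 ** 5              # 2020 -> 2025
print(f"{bill_multiplier:.2f}x")         # ~1.61x, i.e. bills up roughly 61%

# Doubling data-center consumption by 2030 implies this annual growth rate:
implied_growth = 2 ** (1 / 5) - 1        # five years, 2025 -> 2030
print(f"{implied_growth:.1%} per year")  # ~14.9% per year
```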
Public Health Concerns
Communities near data centers are concerned about potential health effects, as the associated increase in air pollution could cause “cancers, asthma, and other diseases.”
In Indianapolis, for instance, dozens of residents protested a proposal for a data center development, citing concerns about rising energy costs and potential pollution. And in Memphis, where an xAI data center is located, the Southern Environmental Law Center is threatening to sue xAI for excluding a key pollutant from its air quality tests.
Can AI Reduce Emissions?
In a new study, researchers estimate that AI, if used to improve transportation, energy, and food production, could actually reduce carbon emissions by 5.4 billion metric tons. For instance, AI could be used to predict supply and demand fluctuations in the grid, identify new proteins to replace emissions-heavy meat and dairy, and lower the cost of electric vehicles. While the study’s scope is limited, it suggests that AI, with the right regulations and investment, could more than offset the emissions it creates.