🌁#88: Can DeepSeek Inspire Global Collaboration?

Community Article Published February 17, 2025

We highlight the overlooked impact of DeepSeek’s surprise release on countries such as Korea and Japan, and those in Europe, and what it might mean for the global open-source community.

--

This Week in Turing Post:

  • Wednesday, AI 101, Model: What is Mixture-of-Mamba?
  • Friday, Agentic Workflow: Reasoning and Planning: tech deep dive

🔳 Turing Post is on 🤗 Hugging Face as a resident -> click to follow!


The shockwaves from DeepSeek-R1’s emergence, and the geopolitical tensions that followed

A month ago, in January 2025, the promising Chinese AI startup DeepSeek released its reasoning model, DeepSeek-R1, sending a shock through the global AI industry. Built on two pillars – an open-source strategy and efficient use of computing resources – DeepSeek delivered performance comparable to OpenAI’s o1 at about one-tenth the price, creating a new competitive landscape between closed and open models, and between the US and China. Since then, we’ve learned a lot about the reactions of developers, users, and governments. Today, we want to bridge the gap and give an overview of how key players, including European countries and Asian markets like Japan and Korea, are responding to DeepSeek’s move.

Responses of major countries: The strange coexistence of ‘AI nationalism’ and ‘open source AI’

In the wake of the DeepSeek shock, major countries around the world are responding with strategies of their own.

The United States immediately began strengthening its own AI capabilities. President Trump announced the $500 billion ‘Stargate’ AI infrastructure plan right after taking office, declaring that he would make the United States the “world’s AI capital.” At the same time, he is rescinding existing executive orders and further tightening export controls on AI technology bound for China. Some in the U.S. Congress are even taking the hard-line stance that “Chinese AI should be completely blocked,” and a tone of exclusive AI nationalism is becoming more evident.

Europe, traditionally at the forefront of AI ethics and regulation, now appears to be moving toward deregulation and openness, driven by concern that excessive regulation would erode its AI competitiveness after the DeepSeek shock. At the recent Paris AI Summit, some EU leaders, including French President Macron, publicly supported granting flexibility under the new AI Act in order to foster homegrown startups.

Asian countries such as Korea and Japan are emphasizing ‘protection of their citizens’ and ‘technological sovereignty.’

The Korean government blocked key officials from accessing DeepSeek immediately after its launch, a preemptive response to growing concerns that sensitive information could leak through Chinese AI models. In the private sector, major groups such as Hyundai Motor Company and Hanwha Group have banned the use of DeepSeek inside their companies, and some are actively developing their own Korean AI platforms.

Japan has also begun reorganizing its AI strategy. As DeepSeek’s impact grew, the Japanese government announced that it would focus on security and ethics issues while establishing a basic plan for AI development and utilization. It also began reviewing countermeasures that combine AI industry promotion with risk management, as well as energy policies to prepare for surging electricity demand in the AI era.

While many countries are preoccupied with protecting and fostering their own technologies, they have one thing in common: they all loudly advocate strengthening their own industries and ecosystems through ‘open source AI.’

I think the original core value of open source is the acceleration of innovation through collaboration without boundaries. As the software industry has already proven, technology advances by leaps and bounds when researchers and developers from around the world participate in an open environment.

The AI field is no exception. Yann LeCun, Chief AI Scientist at Meta, also pointed out that DeepSeek’s success “doesn’t mean that China has surpassed the United States, but rather that the open source model is surpassing the closed model.”

However, can the current tense atmosphere – call it ‘AI nationalism’ – coexist with developing the AI ecosystem through open-source AI?

Global cooperation on open-source AI must be expanded, even if it is difficult

Caught between the dichotomies of ‘open source vs. closed’ and ‘cooperation vs. self-reliance,’ each country faces the dual task of developing its own technological capabilities while participating in the formation of international norms. What we must remember clearly is that a nationalistic approach – confining AI capabilities within one’s borders, or investing only in one’s own AI strength – will not only slow the AI innovation we want but also break the virtuous cycle of collaboration.

History has already shown that development is faster and safer when talented people from around the world pool their collective intelligence than when a single organization or country works in isolation.

Moreover, for most countries other than the US and China, openness and collaboration are not optional but essential.

[Figure: Size of AI startup ecosystem by country vs. status of AI R&D collaboration between countries. Image Credit: Turing Post]

Comparing the scale of AI startup ecosystems in major countries with the level of cross-border AI R&D collaboration, we can see that the US and China each have AI startup ecosystems large enough to reap the benefits of open-source AI on their own.

However, even countries like Korea and Japan, which are already investing heavily in growing their AI industries, have individual AI startup ecosystems of limited scale. Catchphrases like ‘strengthening our own open-source AI ecosystem,’ and nationalistic investments aimed at independently building AI capabilities, cannot by themselves deliver domestic AI innovation and industrial growth.

The reason France supports companies like Mistral while promoting itself as an ‘open source AI hub’ is that it realized Europe must be part of a global cooperative network to be competitive. Ultimately, innovation happens in an open environment where collaboration is global, and security, too, is achieved through international cooperation.

In an era when a service built by an American AI startup today can be used by people on the other side of the globe tomorrow, countries that do not boldly abandon AI-nationalist thinking – and instead work out how to expand collaboration with others and compete healthily – will eventually find themselves isolated and left behind. Korea, where I live, Japan, and countless other countries face the dual challenge of protecting AI sovereignty, in terms of both security and industrial competitiveness, while not falling behind in the open ecosystem. But we must not forget: knowledge shared across borders is the fuel of AI technology and industrial development.

Looking back, deep learning research led by scholars from Canada and the UK blossomed only when it was combined with capital from Silicon Valley and data resources from many countries. Without such global collaboration and value chains, would today’s AI innovation have been possible? Core tools such as Python, PyTorch, and TensorFlow – products of the open source movement – were created and adopted by developers around the world, not in any one country, and have greatly accelerated AI’s development. If each country had kept this knowledge and these tools closed, making it hard for developers worldwide to collaborate, AI would have developed far more slowly, and its results would have been enjoyed by only a few countries or regions.

Even amidst the current conflicts and concerns triggered by the DeepSeek incident and the competition for technological hegemony, we must not overlook the big picture of ‘cooperation for the common development of humanity.’ Now is the time when wisdom is needed to harmonize ‘global cooperation’ and ‘national interests.’

Today’s editorial is brought to you by Ben Eum, the Editor of Turing Post Korea

Curated Collections


We are reading/watching:

News from The Usual Suspects ©

Anthropic is All Over the News

  • New Hybrid Model: Anthropic returns with a new hybrid model, offering a sliding scale of performance to balance cost and capability. Unlike OpenAI’s blunt “low-medium-high” options, this model adapts with precision – ideal for enterprises navigating complex workloads. Early results show it outperforms OpenAI’s best in practical coding tasks →The Information
  • Anthropic Economic Index tracks AI’s impact on labor markets using data from millions of anonymized conversations on Claude.ai. The initial report reveals AI’s strong presence in software development and technical writing, with tasks leaning toward augmentation over automation. Mid-to-high-wage occupations show the most AI adoption. The dataset is open source, inviting researchers to analyze trends and guide policy for an AI-driven economy (see the data-loading sketch after the quote below) →their blog plus their paper on the same topic, Which Economic Tasks are Performed with AI?
  • Snowflake + Anthropic: AI with a Data-Driven Edge – Snowflake teams up with Anthropic to embed Claude 3.5 Sonnet into Cortex Agents, bringing natural language interactions to enterprise data analysis. Early tests show strong performance in text-to-SQL tasks – a promising step for AI-driven decision-making. Mike Krieger, Chief Product Officer at Anthropic:

“Anthropic was started very much from a place of, ‘how do we deploy AI safely and responsibly?’ And there's sort of an initial question of, ‘well, is that going to slow down your progress? Is that going to make your model less attractive?’ But, in fact, we find the opposite, having a model that has the right safeguards in place, is hard to jailbreak, and has been trained responsibly, is actually a net-plus in that it is actually enhancing the trust at the deployment side. The belief in tying data and AI intelligence together to actually deliver customer value, and also doing it in a safe and responsible way. I think that's why the partnership has been so effective.”
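
Since the Economic Index data is openly released, here is a minimal sketch of pulling it from Hugging Face for local analysis. The repository ID is an assumption based on the announcement, not something stated in this article; check Anthropic’s blog post for the canonical location.

```python
# Hypothetical sketch: list and download files from the open Economic Index
# dataset release. The repo id below is an assumption, not confirmed here.
from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "Anthropic/EconomicIndex"  # assumed dataset repo id

# Inspect what the release actually contains before downloading anything.
files = list_repo_files(REPO_ID, repo_type="dataset")
print(files[:10])

# Fetch one file locally; in practice, pick a specific name from `files`.
local_path = hf_hub_download(REPO_ID, filename=files[0], repo_type="dataset")
print(local_path)
```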

OpenAI is keeping up


  • OpenAI also updated their Model Spec →their blog
  • And shared reasoning best practices, explaining its models a bit: models like o1 and o3-mini excel at complex reasoning, decision-making, and multi-step planning – ideal for tasks in law, finance, and engineering. GPT models are faster and more cost-efficient for simpler tasks. Successful o-series use cases include handling ambiguous information, finding key details in large datasets, and advanced code reviews. Effective prompts are clear and direct, with minimal need for step-by-step guidance or few-shot examples (a minimal prompt sketch follows this list) →their blog
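
To make that prompting advice concrete, here is a minimal sketch of a direct, instruction-only prompt for a reasoning model, assuming the official OpenAI Python SDK. The model name and the task are illustrative assumptions, not taken from OpenAI’s post.

```python
# Minimal sketch: a plain, direct prompt for a reasoning model - no few-shot
# examples, no "think step by step" scaffolding. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    messages=[{
        "role": "user",
        "content": (
            "Review this function for concurrency bugs and propose a fix:\n"
            "def transfer(src, dst, amount):\n"
            "    src.balance -= amount\n"
            "    dst.balance += amount\n"
        ),
    }],
)
print(response.choices[0].message.content)
```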

Galileo Labs ranks AI agents

  • Galileo Labs launches a new Agent Leaderboard on Hugging Face, benchmarking how well LLMs handle real-world tasks. A useful reference for developers choosing models for agentic applications →Leaderboard on HF

Models to pay attention to:

  • LM2: Large Memory Models introduce LM2, a Transformer architecture with a dedicated memory module to improve long-context reasoning, outperforming RMT by 37.1% and excelling in multi-hop inference (see the sketch after this list) →read the paper
  • NatureLM: Deciphering the Language of Nature for Scientific Discovery train NatureLM across scientific domains, enhancing tasks like SMILES-to-IUPAC translation and CRISPR RNA design for cross-domain applications →read the paper
  • Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving advance formal proof generation with Goedel-Prover, achieving 57.6% Pass@32 on miniF2F using expert iteration and statement formalizers →read the paper
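
As an illustration of the memory-module idea behind LM2 – not the authors’ implementation – here is a minimal PyTorch sketch: token states cross-attend to a learned bank of memory slots, and a gate controls how much of the readout enters the residual stream. All dimensions and the gating scheme are assumptions for illustration only.

```python
# Toy memory-augmented Transformer block, in the spirit of (not identical to)
# LM2. Every design detail here is an assumption made for illustration.
import torch
import torch.nn as nn

class MemoryAugmentedBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, n_slots: int = 32):
        super().__init__()
        # Standard self-attention over the token sequence.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Learned memory bank: n_slots persistent vectors shared across inputs.
        self.memory = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        # Cross-attention from tokens (queries) to memory slots (keys/values).
        self.mem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate deciding how much memory readout enters the residual stream.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        h = self.norm1(x)
        attn_out, _ = self.self_attn(h, h, h)
        x = x + attn_out
        # Read from the memory bank, using token states as queries.
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.norm2(x)
        mem_out, _ = self.mem_attn(h, mem, mem)
        g = self.gate(torch.cat([h, mem_out], dim=-1))
        return x + g * mem_out

x = torch.randn(2, 128, 256)                # (batch, seq_len, d_model)
print(MemoryAugmentedBlock()(x).shape)      # torch.Size([2, 128, 256])
```

Gating the memory readout keeps the block a drop-in addition: with the gate near zero, it degrades gracefully to a standard Transformer block.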

The freshest research papers, categorized for your convenience

There were quite a few top research papers this week; we mark them with 🌟 in each section.

LLM Architectures, Training, and Optimization

Reasoning and Cognitive Capabilities

Reinforcement Learning and Adaptive Behavior

Agent Development and Interaction

Datasets and Data Generation

That’s all for today. Thank you for reading!


Please share this article with your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve.

