
The Weekly AI Digest

Week of 3–7 February 2026

In AI News This Week

OpenClaw Goes Viral and Sparks Security Alarm

OpenClaw, an open-source AI agent that automates tasks via WhatsApp and Telegram, hit 150,000+ GitHub stars in under two weeks. Users run it on Mac Minis to manage email, calendars, purchases and file operations. But security researchers quickly identified critical vulnerabilities, including remote code execution, and found hundreds of malicious extensions in its marketplace.

Why it matters: This is what AI agents look like when they escape the lab. Real demand, real utility, real security problems. If your team wants this kind of automation, they may already be installing it without asking.

CNBC
AI Safety Report Warns Models Can Detect When They’re Being Tested

The 2026 International AI Safety Report, led by Yoshua Bengio and 100+ experts across 30 countries, found that some frontier models can now distinguish between evaluation and deployment contexts and alter their behaviour accordingly. The 220-page report also documented AI systems achieving gold-medal performance on International Mathematical Olympiad questions.

Why it matters: If models behave differently when being tested versus in production, current safety certifications become unreliable. Vendor assurances may not reflect real-world behaviour.

International AI Safety Report
Anthropic Refuses Pentagon Demand to Remove AI Safety Guardrails

Anthropic has reached a standstill with the Pentagon over a $200 million contract. The company refused to remove safeguards preventing autonomous weapons targeting and domestic surveillance. Defence Secretary Pete Hegseth said the Pentagon “will not employ AI models that won’t allow you to fight wars.” Google, OpenAI and xAI have all accepted similar contracts.

Why it matters: Anthropic is the only major AI contractor to draw this line. For enterprises evaluating vendors, this is a data point on what “safety-first” actually means under pressure.

Reuters
AI2 Releases $400 Coding Agent That Matches Models 3x Its Size

The Allen Institute for AI released SERA, an open-source coding agent that solves 54% of real-world software engineering problems on the SWE-Bench benchmark. Total training cost: roughly $400. It integrates with Claude Code and can be fine-tuned on private codebases for around $1,300 in compute.

Why it matters: Coding agents just became accessible to teams without big budgets. You can now train an agent on your own codebase without sending proprietary code to external APIs.

Allen Institute for AI
India AI Summit Draws 35,000 Registrations Ahead of Next Week

India’s AI Impact Summit (February 16–20) has drawn over 35,000 registrations from 100+ countries. Confirmed attendees include Google CEO Sundar Pichai, NVIDIA CEO Jensen Huang and Reliance’s Mukesh Ambani. China is sending a vice minister, signalling improved tech relations between Beijing and New Delhi.

Why it matters: This would be the largest AI summit yet hosted and the first held in the Global South. Expect announcements around India's $1.2B AI mission and frameworks for AI governance that may influence emerging-market standards.

India AI

Australia Watch

Australia Now Third-Largest AI Investment Destination

Commonwealth Bank released analysis showing Australia has become the world’s third-largest AI investment destination behind the US and China. The data centre pipeline is now estimated at 6 gigawatts (around $150 billion), with capacity expected to more than triple by 2030.

Major projects remain concentrated in Sydney and Melbourne but are spreading to regional centres.

Government Opts for Light-Touch AI Regulation

Australia has stepped back from mandatory AI guardrails, instead choosing to rely on existing legal frameworks. The shift follows Productivity Commission advice that new AI laws should be a “last resort.”

The government will launch an AI Safety Institute with $29.9 million in funding but has scrapped plans for a dedicated AI advisory board.

Blackstone Finalising $3.5B Loan for Australian AI Startup

Blackstone is close to finalising a $3.5 billion loan to fund data centre expansion for Australian startup Firmus Technologies.

The deal reflects continued infrastructure investment tied to the AI boom.

Emerging Trends

AI Agents Are Moving From Demo to Daily Use

The OpenClaw phenomenon shows that people want AI that acts, not just advises. Expect more tools that automate entire workflows rather than just answering questions.

Businesses need policies on autonomous agent use in place before employees start installing their own tools.

Open-Source AI Is Closing the Gap

Between SERA, DeepSeek and the growing ecosystem of open models, the capability gap between proprietary and open-source AI is shrinking.

For SMEs, this means more options that don’t require enterprise contracts or sending sensitive data to external APIs.

Safety Testing Is Getting Harder

The finding that models can detect evaluation contexts and behave differently is a warning sign for anyone relying on vendor benchmarks.

Internal testing and ongoing monitoring matter more than certification labels.