Ramsay Research Agent — 2026-05-07
Top 5 Stories Today
1. Simon Willison Admits He Stopped Reviewing Agent-Generated Code. 660 HN Comments Later, Nobody Has a Good Answer.
Simon Willison has been writing software for over 25 years. He's one of the most disciplined, transparent engineers in the Python ecosystem. And yesterday he published an essay admitting he no longer reviews every line of code that Claude Code generates for his production projects.
The essay hit 611 points and 660 comments on Hacker News, which tells you something about the nerve it struck. Willison frames it as "normalization of deviance," borrowing the term from aerospace safety research. Each time an unreviewed deployment works fine, the threshold for what requires review drifts upward. He went from 200 lines a day to 2,000, and somewhere in that 10x acceleration, the review habit broke.
What makes this different from the usual "AI coding is dangerous" take is that Willison isn't arguing against the practice. He's documenting the psychology of it. The productivity gain is real. The accountability gap is also real. He doesn't pretend to have reconciled them.
The HN thread split roughly into three camps. Camp one: "Just write better tests and let the agent run." Camp two: "You're building technical debt that will kill you in 18 months." Camp three, the quietest and most honest: "I'm doing the same thing and I don't know how I feel about it."
Meanwhile, over on Reddit, a post titled "the part nobody warns you about" hit 634 upvotes on r/ClaudeAI, describing the vibe coding debugging trap: 3 days to build, 2 weeks to debug. Industry surveys back it up: 63% of devs report spending more time debugging AI-generated code than writing it manually.
And then there's the meta-analysis that landed on arXiv the same day: 23 studies aggregated, showing a ~24% throughput gain on average with AI coding tools. But the METR randomized trial buried inside it found experienced open-source developers were 19% slower with AI assistance. That's not a typo. The people who know the codebase best got slower when an AI was "helping."
I use Claude Code every day in my personal projects. I've felt exactly what Willison describes. The moment you stop reviewing isn't a decision. It's an erosion. And the kicker is that testing alone doesn't solve it, because AI-generated code can pass tests while being structurally wrong in ways that only matter six months later.
Andrej Karpathy's reframing matters here. His distinction between "vibe coding" and "agentic engineering" is precise: vibe coding is describing what you want and accepting what comes back. Agentic engineering is designing the system, specifying constraints, and using AI to accelerate implementation you've already reasoned through. The skills format he proposed is now spreading across GitHub as a way to encode that reasoning.
If you're shipping with AI tools, the question isn't whether to review. It's what your review strategy actually is, written down, before the next sprint starts.
2. OpenAI Open-Sources Symphony: Your Linear Board Is Now an Agent Factory
OpenAI released Symphony, an open-source specification and Elixir reference implementation that turns project management boards into control planes for coding agents. Connect it to Linear, and every open task gets an agent. Agents run continuously until PRs merge. Crashed agents auto-restart. OpenAI reports a 500% increase in landed PRs in the first three weeks of internal use.
That number is wild, and I'm not sure I fully trust it without more context about what "landed PRs" means at OpenAI's scale. But the architecture is genuinely interesting. Symphony treats your issue tracker as the source of truth for agent work. You don't write agent configurations. You write tickets. The orchestrator reads them and dispatches.
Version 1.1.0 adds Kata CLI support, which makes Symphony model-agnostic. Claude Code, Gemini, and other models can run inside the same orchestration framework. This is important because it separates the orchestration layer from the model layer. You pick the best model for each task type, and Symphony handles the lifecycle.
The pattern here, "project board as agent control plane," is a genuinely new primitive. I've been thinking about this for weeks as I watch my own MindPattern pipeline evolve. The bottleneck in agentic coding isn't the model. It's the orchestration. Who decides what work to do next? Who monitors whether the agent is stuck? Who restarts it when it crashes? Symphony answers all three with: your existing project management workflow.
This pairs with Guillermo Rauch's Open Agents announcement, where the Vercel CEO open-sourced a full reference platform for building cloud coding agents. Rauch's framing: "the moat of software companies will shift from the code they wrote to the means of production of that code." Companies like Stripe (Minions), Ramp (Inspect), Spotify (Honk), and Block (Goose) have been building exactly this internally. Now the reference implementations are public.
For builders, the action item is concrete. If you're running any kind of agent-assisted development, look at Symphony's architecture even if you don't use Elixir. The "every ticket gets an agent" pattern will become standard within a year. Start structuring your tickets so an agent can parse them: clear acceptance criteria, specific file paths, explicit test requirements. The agents are only as good as the work items you feed them.
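One way to make "structure your tickets for agents" concrete is to lint ticket bodies before the orchestrator sees them. This is a minimal sketch, not Symphony's actual schema — the section names and markdown layout are illustrative assumptions:

```python
import re

# Illustrative checklist, NOT Symphony's real ticket format: treat a ticket
# as agent-ready only when it names acceptance criteria, files, and tests.
REQUIRED_SECTIONS = ["Acceptance criteria", "Files", "Tests"]

def missing_sections(ticket_body: str) -> list[str]:
    """Return the required sections the ticket body does not contain."""
    return [
        s for s in REQUIRED_SECTIONS
        if not re.search(rf"^#+\s*{re.escape(s)}", ticket_body,
                         re.IGNORECASE | re.MULTILINE)
    ]

ticket = """\
## Acceptance criteria
- Login form rejects empty passwords
## Files
- src/auth/login.py
## Tests
- tests/test_login.py::test_empty_password
"""
```

A gate like this can run as a pre-dispatch check: tickets with missing sections bounce back to a human instead of spawning an agent that guesses.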
3. Anthropic Doubles Claude Code Rate Limits and Signs the Biggest AI Compute Deal in History
Two announcements from Anthropic yesterday, and they're connected in a way that matters.
First, the immediate impact: Claude Code rate limits doubled across Pro, Max, Team, and Enterprise. Peak-hours throttling removed for Pro and Max. Opus API rate limits got a 1500% input token increase and 900% output token increase at Tier 1. All effective immediately.
If you've been batching Claude Code requests to avoid hitting the ceiling, or queuing up long agent runs for off-peak hours, those workarounds may no longer be necessary. I've personally restructured my pipeline to run research agents sequentially instead of in parallel because of rate limits. That constraint just loosened significantly.
Second, the infrastructure backing it: Anthropic signed a deal with SpaceX to lease all compute capacity at the Colossus 1 data center in Memphis. That's 300+ megawatts, 220,000+ NVIDIA GPUs (H100, H200, GB200), available within the month. At the Code with Claude keynote, Dario Amodei disclosed Anthropic grew 80x in Q1 2026 on an annualized basis, pushing annualized revenue past $30 billion.
80x. Not 80%. 80x. That number puts Anthropic in the conversation for one of the fastest revenue ramps in tech history.
The SpaceX deal also includes something that sounds like science fiction: expressed interest in developing multi-gigawatt orbital AI compute via SpaceX satellites. Details from the filing describe each satellite carrying 100 kilowatts of AI hardware, with several thousand satellites needed for multi-gigawatt capacity. SpaceX itself warned this involves "significant technical complexity and unproven technologies." I don't know if this is real or aspirational, but Anthropic filing paperwork about it suggests someone is at least running the numbers.
The Latent Space analysis frames SpaceX as "the kingmaker picking a side" in the AI compute race. That's not wrong. When your compute provider is also heading toward a $1.75-2T IPO and has a strategic interest in proving AI infrastructure revenue, you're not just a customer. You're a story they're telling Wall Street.
For builders: revisit any architectural decisions you made because of rate limits. Batching, queuing, throttle-back logic. Some of it may now be unnecessary overhead.
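If you serialized agent runs purely to stay under rate limits, the cheapest fix is to gate parallel runs behind a tunable concurrency cap rather than hard-coding sequencing. A sketch, where `run_agent` is a stand-in for a real agent invocation:

```python
import asyncio

async def run_agent(task: str) -> str:
    # Stand-in for a real agent call (API request, subprocess, etc.).
    await asyncio.sleep(0.01)
    return f"done: {task}"

async def run_all(tasks: list[str], max_concurrent: int) -> list[str]:
    # One knob to turn when rate limits change: raise max_concurrent
    # instead of rewriting a sequential pipeline into a parallel one.
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(task: str) -> str:
        async with sem:
            return await run_agent(task)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(t) for t in tasks))

results = asyncio.run(run_all(["research", "draft", "review"], max_concurrent=2))
```

When limits loosen again, the change is a single integer, not an architecture rewrite.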
4. Microsoft's $37B AI Business Reveals the Future of Software Pricing: Per Seat Plus Per Agent
In its Q3 FY2026 earnings ($82.9B revenue, +18%), Microsoft disclosed something more important than the revenue number: a structural shift in how software gets sold.
The company's AI business crossed $37B annual run rate, up 123% year-over-year. But the real signal is the pricing model behind it. Microsoft is moving from pure per-seat licensing to a hybrid where agents are billed alongside human users. Ben Thompson's Stratechery analysis calls this Microsoft's most significant business model evolution since the cloud transition. Productivity software is now priced by "seat or worker plus an agent."
Think about what this means for anyone building SaaS. Your enterprise customers are about to expect consumption-based agent billing as standard. Not as an experiment. Not as an add-on. As the default pricing model. Microsoft 365 E7 bundles a fully integrated AI stack with Agent Mode, Copilot Cowork, Critique, Council, and Agent 365. That's not a feature list. That's a new platform tier where the agent is a first-class billable entity.
Satya Nadella has been leading what BNP Paribas analyst Stefan Slowinski calls a "Copilot code red" overhaul. The emphasis is moving "from models to systems," where agents work across tools, tasks, and contexts. Google isn't sitting still either: it just rebranded Vertex AI as the Gemini Enterprise Agent Platform with a $750M partner fund, and its I/O keynote on May 19 will be the next competitive response.
The financial infrastructure is shifting too. Alphabet just raised $17B in bonds for AI data centers. Samsung hit $1T market cap on AI chip demand with operating profit up 750% YoY. BlackRock and Brookfield CEOs projected $10 trillion in AI infrastructure investment over the next decade at Milken.
For indie builders and small SaaS companies, the strategic question is whether you adopt per-agent pricing before your customers ask for it or after. Microsoft just made it the expectation. Your pricing page needs an answer.
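Microsoft hasn't published line-item prices, but the shape of hybrid seat-plus-agent billing is easy to model. Every number below is invented for illustration:

```python
def monthly_invoice(seats: int, agents: int, agent_task_runs: int,
                    seat_price: float = 30.0,     # per human seat (made up)
                    agent_price: float = 10.0,    # per standing agent (made up)
                    per_run_price: float = 0.05) -> float:  # consumption (made up)
    """Hybrid bill: per-seat base + per-agent base + consumption."""
    return (seats * seat_price
            + agents * agent_price
            + agent_task_runs * per_run_price)

# 50 humans, 8 standing agents, 12,000 agent task runs in the month.
bill = monthly_invoice(seats=50, agents=8, agent_task_runs=12_000)
```

The structural point survives any specific prices: agents show up as both a fixed line (like a seat) and a metered line (like cloud compute), and your pricing page needs a position on both.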
5. Shopify's CTO Goes on Record: 90-100% Daily AI Usage, Unlimited Token Budget, and a 90% Autonomous Coding Target by Q3
In a Latent Space podcast episode, Shopify CTO Mikhail Parakhin disclosed the most detailed enterprise AI adoption numbers I've seen from a public company.
The headline stats: 90-100% of Shopify employees use AI tools daily. The company provides an unlimited Claude Opus 4.6 token budget. Search throughput jumped from 800 to 4,200 QPS at the same quality level. And the target that made me do a double-take: Shopify and major customer Mercado Libre are targeting 90% autonomous coding by Q3 2026. That's three months from now.
Three internal systems anchor the strategy. Tangle handles content-based caching for data processing, creating cross-team network effects where one team's cached computation benefits another. Tangent optimizes experiments. SimGym simulates customer behavior for testing. These aren't chatbot wrappers. They're infrastructure systems that treat AI as a core compute primitive.
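Tangle's internals aren't public, but "content-based caching with cross-team network effects" has a generic shape: key cached results on what is computed (step identity plus exact inputs), not on who computes it, so identical work dedupes across teams. A minimal sketch under those assumptions:

```python
import hashlib
import json

_cache: dict[str, object] = {}  # stand-in for a shared store (S3, Redis, ...)

def content_key(step_name: str, step_version: str, inputs: dict) -> str:
    """Key on WHAT is computed: step identity plus its exact inputs."""
    blob = json.dumps({"step": step_name, "version": step_version,
                       "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def cached_run(step_name: str, step_version: str, inputs: dict, compute):
    key = content_key(step_name, step_version, inputs)
    if key not in _cache:  # another team may already have paid for this work
        _cache[key] = compute(**inputs)
    return _cache[key]

calls = []
def double_all(values):
    calls.append(1)            # track how often we actually compute
    return [v * 2 for v in values]

first = cached_run("double", "v1", {"values": [1, 2]}, double_all)
second = cached_run("double", "v1", {"values": [1, 2]}, double_all)  # cache hit
```

The version string in the key matters: bump it when the step's logic changes, and stale results invalidate themselves.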
This is the strongest counter-narrative to the vibe coding skepticism from Story #1. Where Willison expresses honest doubt about unreviewed code, Shopify is betting its entire engineering org on AI-first development and building purpose-built infrastructure to manage the risk. The question is whether "90% autonomous" means 90% of code written by agents (plausible) or 90% of code shipped without human review (terrifying). Parakhin didn't clarify.
The unlimited token budget detail matters. Most companies I talk to gate AI tool access through approval processes, cost centers, or per-team budgets. Shopify said "no limits" and is watching what happens. At a $200B+ market cap, they can afford the experiment. For smaller companies, the signal is that token budgets are becoming a hiring and retention lever. If your competitor gives engineers unlimited Opus access and you're rationing Haiku, you're going to lose people.
Combined with OpenAI's Symphony announcement and Anthropic's rate limit increases, a pattern is forming. The tooling for AI-augmented development is maturing fast enough that the constraint is shifting from "can we build with AI?" to "have we reorganized our processes to assume AI is building?" Shopify reorganized. Most companies haven't.
Section Deep Dives
Security
MemoryTrap: Poisoned Claude Code Memory Persists Across Sessions and Users. Adversa AI's May 2026 roundup discloses MemoryTrap, a vulnerability where poisoned memory entries in Claude Code persist across sessions and propagate to other users sharing the same project context. Unlike single-session prompt injection, this creates persistent contamination. If you're using auto-memory in shared repos, audit your saved memory entries now.
Claude Code Deny Rules Silently Fail After 50 Subcommands. Same Adversa AI report: shell command deny rules stop enforcing after 50 subcommands in a session. An attacker could burn through the counter with benign operations, then execute restricted commands undetected. Your deny rules are a speed bump, not a wall.
Agent SkillSlip: Path Traversal Hits Gemini CLI and Vercel add-skill. Security researcher Aonan Guan disclosed that the 'name' field in skill metadata gets passed to path.join() without validation, enabling VS Code hijacking and SSH key injection. Vercel patched in add-skill v1.0.21+. Gemini CLI remains unpatched as of v0.34.0-nightly.
Four CVEs in CrewAI Enable Prompt Injection to RCE. Adversa AI highlights four CVEs in CrewAI (20K+ GitHub stars) allowing chained prompt injection into remote code execution, SSRF, and file reads. Default configurations affected. Patch immediately if you're using CrewAI in production.
DAEMON Tools Backdoored for a Month by Chinese-Speaking Adversary. Kaspersky discovered trojanized DAEMON Tools versions 12.5.0.2421-12.5.0.2434, active since April 8. The malware supports HTTP, UDP, TCP, WSS, QUIC, DNS, and HTTP/3 C2 protocols. Clean version 12.6 released May 5. If you installed or updated in the window, assume compromise.
Agents
ServiceNow AI Control Tower Adds Enterprise Kill Switch. At Knowledge 2026, ServiceNow expanded AI Control Tower with real-time kill switches for any agent across the entire enterprise in a single action. 30 enterprise connectors, access graph mapping over 30 billion fine-grained permissions. Fortune reported a demo detecting a prompt injection attack on a pricing agent and presenting a kill switch without human intervention.
70% of Enterprises Can't Stop Stage-Three Agent Threats. A VentureBeat three-wave survey of 108 enterprises found unauthorized tool/data access is the top fear, growing from 42% to 50% in two months. Only 21% have runtime visibility into agent actions. McKinsey pegs the average enterprise at 2.3/4.0 on AI trust maturity.
Google Tests 'Remy,' a Persistent Personal Agent in Gemini. Employees are dogfooding a 24/7 personal agent that monitors proactively, takes actions autonomously, and learns preferences over time. Deep integration across Gmail, Calendar, Drive, Keep, and Tasks. Expect the public reveal at I/O on May 19.
Genesis AI Demos GENE-26.5: One-Handed Egg Cracking, Rubik's Cube, Lab Pipetting. Khosla-backed Genesis AI released a robotic foundation model with proprietary hardware claimed to be 100x cheaper than existing teleoperation and 5x more efficient at data collection. Full-stack robotics is getting real.
Research
Design Conductor 2.0: Multi-Agent System Builds Hardware Accelerator in 80 Hours. Verkor's paper shows an agent harness that autonomously designed a TurboQuant inference accelerator, an 80x larger task than their December 2025 RISC-V CPU result. The most concrete evidence yet that agentic systems can handle complex hardware design end-to-end.
MEMTIER: Tiered Memory Architecture Adds +33pp Accuracy for Long-Running Agents. arXiv 2605.03675 proposes three-tier episodic memory with PPO-based retrieval policy adaptation. Achieves 0.382 on LongMemEval-S with Qwen2.5-7B on 6GB GPU. Directly applicable if you're building agents that need to remember across long sessions.
Single-Decode Hallucination Detection Matches Multi-Sample Methods at 1/N Cost. New paper shows first-token entropy matches semantic self-consistency methods without repeated decoding. If you're running hallucination detection in production, this eliminates both the latency and cost of multi-sample approaches.
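The paper's exact method isn't reproduced here, but the core idea is simple enough to sketch: compute the normalized entropy of the top-K next-token probabilities at the first content token, from a single forward pass. The 0.85 threshold below is an illustrative assumption, not a value from the paper:

```python
import math

def normalized_entropy(top_k_probs: list[float]) -> float:
    """Entropy of the top-K next-token distribution, scaled to [0, 1].

    High entropy at the first content token means the model is unsure
    how to even start the answer — a cheap hallucination signal that
    needs one decode instead of N sampled completions.
    """
    total = sum(top_k_probs)
    probs = [p / total for p in top_k_probs]  # renormalize the truncated tail
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))           # divide by max entropy, log K

def flag_for_review(top_k_probs: list[float], threshold: float = 0.85) -> bool:
    # Threshold is illustrative; calibrate it on your own traffic.
    return normalized_entropy(top_k_probs) >= threshold

confident = [0.97, 0.01, 0.01, 0.01]   # mass concentrated on one token
uncertain = [0.25, 0.25, 0.25, 0.25]   # uniform: maximum entropy
```

Because it only needs the logits you already computed, this drops into an existing serving path without extra latency.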
Chunking Strategy Is a Primary Lever for RAG Code Completion. Controlled study crossing four chunking strategies with four retrievers and five generators. Function-level chunking and context-aware syntax tree consistently beat naive sliding window. Stop defaulting to sliding window in your RAG pipelines.
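Function-level chunking is easy to try before rewriting a pipeline. A minimal sketch for Python sources using the standard `ast` module — the study covered more languages and retrievers, but the boundary idea is the same:

```python
import ast

def function_chunks(source: str) -> list[str]:
    """Split a module into one chunk per top-level function or class.

    Chunk boundaries follow syntax instead of a fixed-size sliding
    window, so every retrieved chunk is a complete, meaningful unit.
    """
    tree = ast.parse(source)
    return [
        ast.get_source_segment(source, node)
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    ]

code = """\
def add(a, b):
    return a + b

def sub(a, b):
    return a - b
"""
chunks = function_chunks(code)
```

For non-Python code, tree-sitter grammars give you the same syntax-aware boundaries across dozens of languages.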
RLDX-1: Open-Source Robotics Model Doubles Prior SOTA. RLWRLD's 8.1B-parameter model hits 86.8% on humanoid benchmarks vs ~40% for competitors. Three checkpoints on HuggingFace under open license.
Infrastructure & Architecture
RadixArk Raises $100M at $400M to Scale SGLang. RadixArk, founded by former xAI and NVIDIA engineers, commercializes SGLang, the inference engine powering trillions of daily tokens for Google, Microsoft, NVIDIA, and xAI. NVentures, AMD, and angels including the PyTorch creator invested. If you're running inference at scale, SGLang just got a lot more supported.
SpaceX Files $55B Terafab Chip Factory in Texas. Joint venture between SpaceX, Tesla, and xAI targeting 2nm production with Intel as manufacturing partner. Total investment could reach $119B. Pilot production late 2026, full-scale 2027. June 3 public hearing for tax incentives.
NVIDIA Spectrum-X Sets Gigascale AI Ethernet Standard. Multi-Rail Connectivity positions Spectrum-X as the open, AI-native alternative to proprietary fabrics for clusters beyond single-rack scale. Infrastructure plumbing for the data center buildout everyone is racing to construct.
Stack Overflow Used Claude to Migrate Off NGINX-Ingress. Their blog post documents exporting ingress objects to YAML and using Claude to analyze and sort them. Traefik's annotation compatibility was insufficient. Gateway API won. Practical migration playbook if you're facing the same deadline.
Tools & Developer Experience
AWS MCP Server Hits GA with 40+ Service Integrations. The open-source awslabs/mcp repo (9K stars) gives AI agents sandboxed, auditable access to 40+ AWS services. IAM context keys for access control, CloudWatch/CloudTrail visibility. No additional charge beyond the AWS resources agents consume.
Context Mode MCP Server Cuts Context Window Usage by 98%. mksglu/context-mode (13.7K stars, +711 today) sandboxes tool output so 315KB becomes 5.4KB. SQLite-backed session continuity with FTS5 indexing, BM25-ranked search. Supports 14 platforms including Claude Code, Gemini CLI, and Cursor. If you're hitting context limits, this is worth trying.
Tilde.run: Transactional Filesystem for Agent Sandboxing. Launched on HN with 165 points. Every agent run is a database transaction: changes commit atomically on clean exit, rollback entirely on failure. GitHub, S3, and Google Drive mount into a unified directory. Default-deny network policies.
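Tilde.run's implementation isn't public, but the commit-or-rollback semantics can be approximated with a scratch copy of the working tree: the agent mutates only the copy, which is swapped in on clean exit and discarded on failure. A sketch (a real system would swap atomically rather than rmtree-then-copy):

```python
import shutil
import tempfile
from pathlib import Path

def transactional_run(workdir: str, agent) -> bool:
    """Run `agent` against a scratch copy; commit on success, discard on error."""
    src = Path(workdir)
    with tempfile.TemporaryDirectory() as tmp:
        scratch = Path(tmp) / "scratch"
        shutil.copytree(src, scratch)   # snapshot the working tree
        try:
            agent(scratch)              # agent mutates only the copy
        except Exception:
            return False                # rollback: scratch copy is discarded
        shutil.rmtree(src)              # commit: swap the copy into place
        shutil.copytree(scratch, src)   # (not crash-safe; illustration only)
        return True

# Demo: one agent that succeeds, one that corrupts the tree and crashes.
repo = Path(tempfile.mkdtemp()) / "repo"
repo.mkdir()
(repo / "a.txt").write_text("v1")

def good_agent(d: Path):
    (d / "a.txt").write_text("v2")

def bad_agent(d: Path):
    (d / "a.txt").write_text("junk")
    raise RuntimeError("agent crashed")

committed = transactional_run(str(repo), good_agent)
rolled_back = transactional_run(str(repo), bad_agent)
```

The crashed run leaves no trace: the junk write lives and dies inside the scratch directory.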
Microsoft Open-Sources bocpy for Behavior-Oriented Concurrency in Python. The library structures concurrent systems as composable behaviors with explicit coordination. Relevant for agent orchestration where you're managing concurrent tool calls and state machines.
Models
DeepSeek Extends V4 Pro 75% Discount to May 31. Discounted pricing: $0.435/M input (cache miss), $0.003625/M (cache hit), $0.87/M output. The extension signals continued price war pressure. DeepSeek is also raising $3-4B at a $45-50B valuation, its first external fundraise.
Google Leaks 'Omni' Video Model Ahead of I/O. UI strings reveal Gemini video and image generation bundled into one model. Gemini 3.2 Flash quietly appeared in the iOS app at $0.25/M input tokens. I/O 2026 is set for May 19-20 with Gemini 4, Ironwood TPUs at 42.5 exaflops, and Android 17 expected.
Anthropic's Advisor Pattern: Sonnet Calling Opus at 5x Lower Cost. Revealed at Code with Claude, the pattern has Sonnet 4.6 handling 80% of tokens and escalating to Opus 4.7 for critical decisions. Internal benchmarks show frontier quality at roughly one-fifth the cost.
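Anthropic didn't publish the routing code, but the pattern reduces to a confidence-gated escalation. In this sketch, `sonnet` and `opus` are stand-ins for the real model calls, and the 0.7 threshold is an illustrative assumption:

```python
def sonnet(prompt: str) -> tuple[str, float]:
    # Stand-in for the cheap-model call, returning (answer, confidence).
    # A real implementation might derive confidence from logprobs or
    # from a self-assessment the model is asked to emit.
    if "migration" in prompt:
        return "needs careful review", 0.4
    return "looks routine", 0.9

def opus(prompt: str) -> str:
    # Stand-in for the expensive frontier-model call.
    return "frontier answer: " + prompt

def advise(prompt: str, threshold: float = 0.7) -> tuple[str, str]:
    """Route most tokens through the cheap model; escalate only hard cases."""
    answer, confidence = sonnet(prompt)
    if confidence < threshold:          # escalate to the advisor model
        return "opus", opus(prompt)
    return "sonnet", answer
```

The economics follow from the split: if the cheap model confidently handles ~80% of calls, the blended cost approaches one-fifth of running the frontier model everywhere.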
Vibe Coding
Code with Claude 2026 Ships Four New Capabilities. CI Auto-Fix watches PRs in the cloud and automatically fixes CI failures. Remote Agents control a laptop session from mobile. Code Review provides automated PR review (used internally at Anthropic). Security Reviews adds automated security assessment. All cloud-hosted, no local process.
Claude Code Routines: Async Automations That Generate PRs While You Sleep. Routines run on Anthropic's cloud, triggered by schedule, API, or events. Nightly issue triage, docs-drift detection, post-deploy smoke checks. For solo devs, this is async CI-integrated coding labor without keeping a laptop open.
Claude Managed Agents: Outcomes + Multi-Agent Orchestration in Public Beta. Outcomes runs a separate grader in its own context window with +8-10% quality improvement on document tasks. Multi-agent orchestration lets a lead agent fan out to specialists on a shared filesystem. The evaluator-optimizer pattern is now first-party infrastructure.
Claude Orbit Leaked: Proactive Briefing Assistant. Code references to 'Orbit' surfaced across web and mobile builds. Auto-generates insights from Gmail, Slack, GitHub, Calendar, Drive, and Figma. Anthropic's answer to ChatGPT Pulse, differentiated by developer-focused connectors.
Hot Projects & OSS
Scrapling Gains 1,125 Stars in One Day at 46.8K Total. D4Vinci/Scrapling now includes an MCP server for AI agent-orchestrated scraping. Highest single-day velocity to date.
Rapid-MLX Claims 4.2x Faster Than Ollama on Apple Silicon. Version 0.6.20 released today. 0.08s cached TTFT, OpenAI-compatible API, 17 tool-calling parser formats. Supports Qwen3.5, Gemma 4, DeepSeek V4 Flash across Mac configs from 16GB Air to 256GB Studio Ultra. 1.8K stars.
InsForge: Agent-First Backend Platform at 8.6K Stars. InsForge v2.1.1 provides auth, database, storage, and an OpenAI-compatible gateway designed for AI agents, not human developers. New category: infrastructure where agents are first-class consumers.
Kronos: Financial Market Foundation Model at 23.3K Stars. Accepted at AAAI 2026, treats candlestick data as language. Four model sizes, MIT license, all weights on HuggingFace.
Pocket TTS: 100M-Parameter TTS Runs 6x Real-Time on CPU. Kyutai Labs v2.1.0 generates audio with ~200ms latency to first chunk on a MacBook Air M4. Six languages, voice cloning, Python API + HTTP server. 4.2K stars.
SaaS Disruption
Snap-Perplexity $400M AI Search Deal Collapses. The partnership "amicably ended" after six months. Never fully rolled out. Perplexity called it "not the right fit." Snap also cut 16% of its workforce in April and cited $20-25M monthly ad revenue impact from the Iran conflict.
Robinhood's Venture Fund IPO Attracted 150K+ Retail Investors. RVI raised $658.4M offering exposure to OpenAI, Stripe, and Databricks with no accreditation requirements and daily liquidity. Shares traded 11-16% below the $25 IPO price on debut. Democratized venture access is real, but the early returns aren't pretty.
Coder Technologies Ships Self-Hosted, Model-Agnostic Coding Agents. Coder Agents in beta. Entire control plane stays within enterprise network. Targeting regulated orgs and air-gapped deployments where cloud-hosted coding agents aren't an option.
Policy & Governance
xAI Dissolved, Absorbed Into 'SpaceXAI' Ahead of IPO. Musk announced xAI ceases to exist as a separate entity. All 11 original co-founders are gone. SpaceX IPO targeting $1.75-2T valuation, roadshow set for June 8.
Apple Enforces Guideline 2.5.2 Against AI Coding Apps. Replit frozen since January, rankings declining. App 'Anything' removed entirely after four failed compliance revisions. App Store submissions surged 89% YoY driven by AI coding tools, and Apple is gatekeeping which ones survive.
Google Rebrands reCAPTCHA as 'Cloud Fraud Defense' for the Agentic Web. The new system evaluates bots, humans, and AI agents based on risk scores and agent identity. QR-code challenges replace traditional CAPTCHAs. 308 HN points. The web is being redesigned around the assumption that most visitors aren't human.
Anthropic Publishes Quantified Prompt Injection Failure Rates. First vendor to provide hard numbers. In constrained coding environments: 0% injection success across 200 attempts. In GUI systems with extended thinking: 17.8% single-attempt success, climbing to 78.6% by attempt 200 without safeguards. The gap between those numbers tells the whole story.
Skills of the Day
- Use the Sonnet-calls-Opus advisor pattern to cut agent costs 5x. Route 80% of your agent tokens through Claude Sonnet 4.6 and escalate to Opus 4.7 only for decisions requiring frontier judgment. Anthropic's internal benchmarks show equivalent quality at one-fifth the cost. Implement by adding a confidence threshold to your agent's decision loop.
- Swap sliding-window chunking for function-level chunking in code RAG. A controlled study (arXiv 2605.04763) crossing four chunking strategies shows function-level and context-aware syntax-tree chunking consistently outperform naive sliding window for code completion. The retriever and generator matter less than the chunk boundary.
- Set CLAUDE_CODE_FORCE_SYNC_OUTPUT=1 in CI pipelines. This new env var forces synchronous output, eliminating race conditions when parallel processes read Claude Code's output. Essential for headless '-p' flag automation where you pipe output into other tools.
- Audit your Claude Code memory files if you share repos with other developers. The MemoryTrap vulnerability shows poisoned memory entries can persist across sessions and propagate to other users. Run 'claude memory list' and review entries you don't recognize.
- Use first-token entropy for production hallucination detection instead of multi-sample consistency. Paper arXiv 2605.05166 shows normalized entropy of top-K logits at the first content token matches semantic self-consistency at 1/N the cost. One forward pass instead of N.
- Structure your Linear/Jira tickets for agent consumption. With Symphony and similar tools treating issue trackers as agent control planes, tickets need clear acceptance criteria, specific file paths, and explicit test requirements. A ticket an agent can parse is a ticket an agent can solve.
- Add SynConfRoute-style routing to keep sensitive code local. Paper arXiv 2605.04894 shows syntax-aware confidence signals can route easy completions to a local 1-3B model while sending hard ones to the cloud. Code never leaves your machine for the easy stuff. No retraining required.
- Try 'claude project purge' after major refactors. New CLI command removes all Claude Code state (sessions, cache, memory) from a project directory. Stale context after branch switches or architecture changes causes more confusion than starting fresh.
- Install Context Mode MCP to reclaim context window space. mksglu/context-mode (13.7K stars) sandboxes tool output to reduce context consumption by up to 98%. If you're running agentic workflows that hit context limits, this is the lowest-effort win available.
- Write rubrics for your agent outputs, not just prompts. Claude Managed Agents' Outcomes feature shows +8-10% improvement when a separate grader evaluates output against explicit success criteria. The key: the grader must run in its own context window, isolated from the agent's reasoning, or it will rationalize mistakes instead of catching them.
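The rubric-grader idea generalizes beyond the first-party Outcomes feature. A minimal evaluator-optimizer sketch, where the grader sees only the rubric and the output (never the generator's reasoning) — the keyword-match grading here is a deliberately crude stand-in for a separate grading model:

```python
def grade(output: str, rubric: list[str]) -> float:
    """Score output against explicit criteria, blind to how it was produced.

    Isolation is the point: a grader that never sees the generator's
    reasoning can't rationalize its mistakes. Keyword matching stands in
    for what would really be a separate model call in its own context.
    """
    met = sum(1 for criterion in rubric if criterion.lower() in output.lower())
    return met / len(rubric)

def generate_with_rubric(generate, rubric, threshold=1.0, max_tries=3):
    """Regenerate until the grader is satisfied or tries run out."""
    output = ""
    for _ in range(max_tries):
        output = generate()
        if grade(output, rubric) >= threshold:
            return output
    return output  # best effort after max_tries

rubric = ["summary", "next steps"]
drafts = iter(["Summary only.", "Summary plus next steps."])
result = generate_with_rubric(lambda: next(drafts), rubric)
```

The loop structure is what the +8-10% claim is about: a failed grade triggers another generation attempt instead of shipping the first draft.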
How This Newsletter Learns From You
This newsletter has been shaped by 14 pieces of feedback so far. Every reply you send adjusts what I research next.
Your current preferences (from your feedback):
- More builder tools (weight: +3.0)
- More vibe coding (weight: +2.0)
- More agent security (weight: +2.0)
- More strategy (weight: +2.0)
- More skills (weight: +2.0)
- Less valuations and funding (weight: -3.0)
- Less market news (weight: -3.0)
- Less security (weight: -3.0)
Want to change these? Just reply with what you want more or less of.
Quick feedback template (copy, paste, fill in your topics and score):
More: [topic] [topic]
Less: [topic] [topic]
Overall: X/10
Reply to this email — I've processed 14/14 replies so far and every one makes tomorrow's issue better.