Your pipeline is failing because your list is garbage.
Start building a master list using real buying signals.
New CMO hire in the last 90 days? That's a signal.
Recent funding round? That's a signal.
Use tools like Apollo for data.
The right 100 contacts will outperform 10,000 random ones.
09.09.2025 14:20
👍 0
🔁 0
💬 0
📌 0
Takeaways:
- Model switchers confuse even technical users
- Technical names create distance, not connection
- Earth elements work across all cultures instantly
- Intent-based routing beats manual selection
- The most powerful technology should feel the most human
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
If you liked this, you’ll love my Big Players newsletter. I’ve built brands like Fireball, scaled agency ops, and now I write tactical breakdowns for operators using AI to win.
Join thousands of operators:
https://bigplayers.co/subscribe
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
10/ Look, @OpenAI is the most revolutionary company of my lifetime.
GPT-5 is incredible. Memory personalization is magic. My kids love Advanced Voice Mode.
They've ushered in a completely new world.
I just want this new world to feel alive and human.
08.08.2025 21:32
👍 1
🔁 0
💬 0
📌 0
9/ Natural naming would make model choice feel obvious.
It lets @OpenAI's platform scale without getting colder or more complicated.
It builds the brand people feel, not just the one they use.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
8/ Technical users still get full control.
Behind "Storm" lives the complete specification. API endpoints, context windows, parameters (all accessible).
But everyone else just says "give me something powerful" and it works.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
7/ Most people don't want to choose models anyway.
They want to describe their intent and get to work.
"Help me write a proposal" → System picks the right tool
"Solve this math problem" → Routes automatically
The confusion vanishes completely.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
6/ Here's the magic: you don't need to know the category or strength level.
"I'm writing a quick email" → Breeze appears
"I need help with complex analysis" → Blaze shows up
"I'm building something big" → Bloom activates
The system handles everything.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
5/ Universal concepts that work everywhere:
- A kid in Tokyo gets it instantly
- A grandmother in rural Kenya understands immediately
- No translations needed
- No technical jargon
Weather. Fire. Plants. Elements that connect every human on Earth.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
4/ Then four strength levels using nature itself:
𝗕𝘂𝗶𝗹𝗱𝗲𝗿: Seed → Sprout → Vine → Bloom
𝗧𝗵𝗶𝗻𝗸𝗲𝗿: Spark → Flame → Blaze → Nova
𝗣𝗮𝗿𝘁𝗻𝗲𝗿: Breeze → Wave → Storm → Surge
Weather, fire, plants. Concepts every human understands instantly.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
3/ I pitched them something radically different.
Three categories based on what people actually want to do:
𝗕𝘂𝗶𝗹𝗱𝗲𝗿 → to create things
𝗧𝗵𝗶𝗻𝗸𝗲𝗿 → to solve problems
𝗣𝗮𝗿𝘁𝗻𝗲𝗿 → to live and work together
Simple. Human. Universal.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
2/ But the real problem wasn't the interface. It was the names themselves.
"GPT-4.1-nano" "gpt-4o-mini" "o4-mini-high"
Cold. Technical. Alien. Like choosing between server configurations, not creative partners.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
1/ The model switcher was broken from day one.
Even technical people couldn't explain what each model was good at.
"Use o3 for reasoning, GPT-4o for... um... other stuff?" "GPT-4o-mini is cheaper but when do you actually use it?"
Complete confusion.
08.08.2025 21:32
👍 0
🔁 0
💬 0
📌 0
Several months ago (before GPT-5) I pitched @OpenAI a revolutionary system for their models.
Now that the model switcher is gone...
I'm sharing exactly what I proposed 🧵
08.08.2025 21:32
👍 0
🔁 0
💬 12
📌 0
Time might be running out!
If you enjoyed this thread:
1. Follow me @matthewberman.bsky.social for AI powered systems
2. RT to help the algorithm
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
11/ Takeaways:
• Chain of thought monitoring lets us read AI reasoning for the first time
• Hard tasks force models to externalize thoughts in human language
• Researchers are catching deceptive behavior in reasoning traces
• This opportunity is fragile and may disappear
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
10/ It's the greatest time in history to build! Thousands of operators are reading my newsletter every week for AI powered growth. Get the free newsletter.
https://bigplayers.co/subscribe
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
9/ What's next?
Labs like @Anthropic and @OpenAI need to publish monitorability scores alongside capability benchmarks.
Developers should factor reasoning transparency into training decisions.
We might be looking at the first and last chance
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
8/ Leading AI researchers are now calling for:
• Standardized monitorability evaluations
• Tracking reasoning transparency in model cards
• Considering CoT monitoring in deployment decisions
The window is open NOW - but it won't stay that way.
17.07.2025 14:40
👍 1
🔁 0
💬 0
📌 0
picture
7/ The implications are STAGGERING for humanity's future:
This could be our ONLY window to detect when AI systems are:
• Planning to deceive us
• Gaming their reward systems
• Developing self-preservation instincts
• Plotting against human interests
Before they get smart enough to hide these
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
6/ Three major threats to Chain of Thought monitoring:
• Direct supervision making reasoning less honest
• Outcome-based RL breaking human language patterns
• Novel architectures that reason in "latent space"
We might lose our best shot at AI interpretability.
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
5/ But here's the scary part - this opportunity is FRAGILE.
As AI labs scale up reinforcement learning, models might drift away from human-readable reasoning.
They could develop "alien" thinking patterns we can't decode, closing this safety window forever.
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
4/ The results are mind-blowing:
Researchers caught models explicitly saying things like:
• "Let's hack"
• "Let's sabotage"
• "I'm transferring money because the website instructed me to"
Chain of thought monitoring spotted misbehavior that would be invisible otherwise.
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
3/ This creates what researchers call the "externalized reasoning property":
For sufficiently difficult tasks, AI models are physically unable to hide their reasoning process.
They have to "think out loud" in human language we can understand and monitor.
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
2/ Here's how it works:
When AI models tackle hard problems, they MUST externalize their reasoning into human language.
It's not optional - the Transformer architecture literally forces complex reasoning to pass through the chain of thought as "working memory."
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
1/ When rival AI labs that usually guard secrets like nuclear codes suddenly collaborate on research, you KNOW something big is happening.
This is the first time we've EVER been able to peek inside an AI's mind and see its actual thought process.
But there's a terrifying catch...
17.07.2025 14:40
👍 0
🔁 0
💬 0
📌 0
picture
Scientists from @OpenAI, @GoogleDeepMind, @AnthropicAI and @MetaAI just abandoned their fierce rivalry to issue an URGENT joint warning.
Here's what has them so terrified: 🧵
17.07.2025 14:40
👍 0
🔁 0
💬 12
📌 1
Thanks for reading! Follow @matthewberman.bsky.social for more data-driven growth insights.
If you found value, repost to share. bsky.app/profile/matt...
30.05.2025 22:13
👍 0
🔁 0
💬 0
📌 0
picture
Takeaways:
• $9.71B in value built during Facebook's golden window (2012-2016)
• Casper: $1M month 1, Brooklinen: $500K to $15M in 5 years
• Platform timing was as critical as product-market fit
• Customer acquisition costs of $10-15 during peak efficiency period
30.05.2025 22:09
👍 0
🔁 0
💬 1
📌 0
Want more data-driven insights and AI-powered growth plays?
I’ve packed 15 years of hard-won growth lessons into my free newsletter.
Thousands of founders read it weekly:
https://bigplayers.co/subscribe
30.05.2025 22:09
👍 0
🔁 0
💬 0
📌 0