We are so excited to have @veerle.hypebright.nl on board to help lead the way!
@athlyticz
https://athlyticz.com
What We Offer: Dive deep into data science and programming concepts, uniquely taught through captivating sports analytics examples and applicable across industries such as engineering, finance, sports, and medicine!
Appreciate the conversations with @christophsax.bsky.social and @davidgranjon.bsky.social to get this done. In the coming months, you'll hear more about our vision for using blockr in sports, followed by workshop offerings directly from Cynkra on the Athlyticz platform.
Onward.
A few things that caught our attention:
• Build data apps in minutes with drag-and-drop blocks
• Each block handles one step: reading, transforming, visualizing
• Fully extensible: if you can code, you can build custom blocks
Read more about the project here: www.cynkra.com/blockr/
Think of it as visual programming powered by R, accessible to analysts who want to wrangle data and build dashboards without writing scripts. It's funded by @bms-news.bsky.social, battle-tested in pharma, and now coming to sports.
Cynkra will be leading workshops on blockr, their open-source framework for building data pipelines using a visual, point-and-click interface. No code required.
Excited to share that @cynkra.bsky.social has officially signed on as a Preferred Partner with Athlyticz.
What does this mean?
Our goal is to bring the strongest teams and individuals to our students, people at the forefront of data science tools that we believe can be game-changers in sports.
More physics-constrained Bayesian models lead to another interactive mobile app with my students.
Writeup coming soon
Why is AI so slow? The bottleneck isn't compute; it's memory.
towardsdatascience.com/the-stranges...
TiDAR = 6x speedup. Move from prompts to infrastructure engineering.
#AI #LLM #DataScience #Engineering #Athlyticz
Can anyone guess what we are building for our students?!
#data #datascience #sportsanalytics #rstats #python
For students & early-career analysts applying to front offices: the model is the core of the work; that's where the rigor lives. But what puts you over the top is showing you can translate that model into expert storytelling, especially in an interview when you're walking someone through your work.
This is also season-long and context-neutral (no L/R splits). A matchup-level application would be a different beast entirely (automated daily pipelines, game-day lineup optimization, etc.).
Projections here are from an actual Bayesian framework that's been jittered/shifted to mask the information (so don't read too much into the values, e.g., Judge projected for 60 HR). This is a tooling demo. In production it pulls from an internal model stored in a database.
It ranks players against their position, flags credible interval overlap, & frames the output the way a front office would: uncertainty bands, replacement value, and roster construction implications. The goal is consistent, repeatable reports that a decision-maker can trust, not a chatbot summary.
It computes positional averages from the dataset itself, classifies hitter archetypes from K%/BB%/ISO combinations (elite contact-and-discipline, boom-or-bust power, patient on-base-driven, etc.), and adjusts its analysis based on defensive position.
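A minimal sketch of that K%/BB%/ISO archetype classification. The thresholds below are made up for illustration; the post doesn't publish the app's actual cutoffs:

```python
# Hypothetical archetype rules -- threshold values are assumptions,
# not the demo app's real classification logic.
def classify_hitter(k_pct: float, bb_pct: float, iso: float) -> str:
    """Map strikeout rate, walk rate, and isolated power into a rough archetype."""
    if k_pct < 15 and bb_pct > 10:
        return "elite contact-and-discipline"
    if iso > 0.250 and k_pct > 25:
        return "boom-or-bust power"
    if bb_pct > 12:
        return "patient on-base-driven"
    if iso > 0.200:
        return "power-leaning"
    return "balanced"

print(classify_hitter(k_pct=12.0, bb_pct=11.5, iso=0.180))  # elite contact-and-discipline
print(classify_hitter(k_pct=28.0, bb_pct=7.0, iso=0.280))   # boom-or-bust power
```

The same pattern extends naturally to position-aware adjustments: compute positional averages from the dataset first, then express each input as a percentile before classifying.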
On the AI agent, in case anyone is curious: this isn't a generic LLM prompt. The scouting narrative engine is built with domain logic baked in.
• Head-to-head comparison tool with overlapping credible interval bars and league average benchmarks
• AI scouting agent that writes positional value reports, factoring in projection uncertainty, roster construction, and positional scarcity
• 1-click PDF one-pagers for trade deadline prep/meetings
• Filterable player cards with triple slash lines, K%/BB%/ISO percentile circles, and OPS credible intervals (in practice, wRC+, wOBA, etc. could slot in here); this is illustrative for showcasing intervals relative to league average (or any average you choose)
Spring training is here.
Every analytics department has projections and looks at them in different ways.
Here's a quick concept app I put together for my students with the Rapid App Prototyping strategies I've been documenting. TL;DR: ingest full-season player projections and make them usable.
Note: I am not a hockey analyst so had to do some research. I am sure there's much cooler stuff you can do here!
For the rink, I ingested this PDF into Claude and extracted key measurements before building it out in D3:
hockeymanitoba.ca/wp-content/u...
Good luck to all entering :)
If you're a student sitting on the fence, just start. You don't need a perfect submission. You need a question and the willingness to get your hands dirty with some great data.
The competition is open to undergrads, grad students, and independents. Link below.
stathletes.com/big-data-cup
What do you do with 30fps positional data across 60 minutes? How do you turn that into something a GM can look at for 90 seconds and walk away smarter?
That's the skill. And the Big Data Cup is one of the few places where students get real data to practice it.
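To make the 30fps question concrete, here's a toy reduction of positional frames into per-minute summaries. The `(x, y)` frame format and the distance/speed metrics are assumptions for illustration, not any real tracking-data spec:

```python
# Collapse 30 fps (x, y) tracking frames into per-window distance/speed.
FPS = 30

def summarize_track(frames, window_s=60):
    """Reduce a list of (x, y) positions into per-window movement summaries."""
    step = FPS * window_s
    out = []
    for i in range(0, len(frames) - 1, step):
        chunk = frames[i:i + step + 1]   # include one extra frame so gaps line up
        dist = sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(chunk, chunk[1:])
        )
        seconds = (len(chunk) - 1) / FPS
        out.append({
            "window": i // step,
            "distance_m": round(dist, 1),
            "avg_speed_mps": round(dist / seconds, 2) if seconds else 0.0,
        })
    return out

# Synthetic skater moving in a straight line at ~5 m/s for 2 minutes
frames = [(t / FPS * 5.0, 0.0) for t in range(FPS * 120)]
print(summarize_track(frames))  # ~300 m and ~5 m/s per one-minute window
```

The interesting decisions live above this function: smoothing noisy coordinates, segmenting by shift instead of clock time, and choosing which summaries a GM actually needs.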
Here's why I think working with tracking data matters:
The gap between "I can run a model" and "I can build something a coach would use" is where careers in sports analytics are made. Tracking data is messy. The interesting work isn't the model; it's the decisions you make before the model.
"A Clockwise Stone Drifts Right. That's Wrong."
athlyticz.substack.com/p/a-clockwis...
Curling was on TV this weekend. I went down a rabbit hole. Three physics PDFs later, I had a full interactive simulator with Bayesian model comparison running in the browser.
#bayesian #curling #olympics
The future isn't about writing code. It's about engineering the systems that write it for you.
Are you building a workflow or just a prompt?
#ClaudeCode #AI #SoftwareEngineering #Athlyticz #DataScience #LLMs #CompoundingEngineering #DeveloperTools
• Verification Loops: The secret to 3x quality is giving Claude a way to verify its work (tests, bash commands, UI checks) before you see it.
At @Athlyticz, we build the live Positron environments and VMs where you actually master these workflows. You can't learn orchestration from a slideshow.
Here are the big 3 takeaways:
• Team Memory: They use a shared CLAUDE.md file to document mistakes. Claude learns from failures.
• Orchestration: Boris manages up to 15 agents in parallel. He isn't "chatting" with AI: he's managing a fleet across terminal tabs & web sessions.
Most people use AI to write code. Boris Cherny uses it to build a system that writes code for him.
The creator of Claude Code just shared his "Compounding Engineering" setup: x.com/bcherny/stat...
Your research team doesn't just produce data. It produces knowledge.
But where does that go when a researcher moves on?
Most teams rely on "Internal Folklore": the stuff that only exists in Slack or one person's head.
The best research teams treat documentation (Notion) as their "Team Memory."
How do you rank 51 F1 drivers when not everyone finishes every race?
Crashes happen. Engines fail. Some races have 22 finishers, others have 11. The data is incomplete and messy.
Traditional methods choke on this. Elo needs head-to-head pairs. Average finish position ignores who you raced against.
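One standard baseline for this problem is to treat each race's finishing order as a set of pairwise comparisons among that race's finishers only, then fit Bradley-Terry strengths. This is a toy sketch on fabricated data, not the ranking model behind the post (which is presumably a richer Bayesian latent-skill setup):

```python
from collections import defaultdict
from itertools import combinations

def bradley_terry(races, iters=200):
    """Fit Bradley-Terry strengths from finishing orders (winner first).

    DNFs simply don't appear in a race's order, so incomplete grids are
    handled naturally: each race only generates comparisons among its
    own finishers.
    """
    wins = defaultdict(float)    # total pairwise "finished ahead" counts
    pairs = defaultdict(float)   # comparison counts per unordered pair
    drivers = set()
    for order in races:
        drivers.update(order)
        for a, b in combinations(order, 2):   # a finished ahead of b
            wins[a] += 1
            pairs[frozenset((a, b))] += 1
    strength = {d: 1.0 for d in drivers}
    for _ in range(iters):                    # minorize-maximize updates
        new = {}
        for d in drivers:
            denom = sum(
                pairs[frozenset((d, e))] / (strength[d] + strength[e])
                for e in drivers if e != d
            )
            new[d] = wins[d] / denom if denom else strength[d]
        total = sum(new.values())
        strength = {d: s * len(new) / total for d, s in new.items()}  # normalize
    return strength

# Fabricated toy season: A beats B when both finish; last race has one finisher
races = [["A", "B", "C"], ["A", "B"], ["B"]]
s = bradley_terry(races)
print(sorted(s, key=s.get, reverse=True))  # ['A', 'B', 'C']
```

Note how the one-finisher race contributes nothing (no pairs), while Elo-style sequential updates would need artificial head-to-head bookkeeping to use it at all.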
320 free workshop spots in February.
We'll build production D3 visualizations from scratch using Claude; you watch, then carry forward your own projects.
This NFL field viz took 15 minutes. 700 lines. Spec-accurate.
Students trying to break into sports analytics: this one's for you.