Are people calling the Nvidia cosmos paper the next "attention is all you need" but for physical AI systems?
A few years ago @vickiboykis.com posted a sample of Postmodern Jukebox in this category.
In tech this year, one example was the first time I used @crmarsh.com `uv pip install`
I'm curious if folks have others?
But anyway, this vid from a few years ago is my recent go-to example
tldr www.youtube.com/watch?v=TRCJ...
End of year we tend to reflect on "Best of...". I've been thinking of _craft excellence_, e.g. cases where someone produces a masterpiece, an example of them just being excellent at their craft.
Day 17! Today's challenge is about conditional branching in data pipelines
youtu.be/MgkniMnmhMU
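Not the video's actual solution, just a toy pure-Python sketch of the idea: route each record down a different branch of the pipeline based on a condition (the branch functions are hypothetical placeholders).

```python
def clean(record: dict) -> dict:
    # Happy-path branch: normalize the value.
    return {**record, "value": record["value"] * 2}

def quarantine(record: dict) -> dict:
    # Fallback branch: flag incomplete records for review.
    return {**record, "quarantined": True}

def branch(record: dict) -> dict:
    # Conditional branching: pick the downstream step per record.
    return clean(record) if record.get("value") is not None else quarantine(record)

results = [branch(r) for r in [{"value": 3}, {"value": None}]]
# results == [{"value": 6}, {"value": None, "quarantined": True}]
```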
Day 16! youtu.be/mrlFdKJlosY
(sorry for no video on day 15... life ya know)
I see what you did there, and I am not honoring it with a heart reaction.
what makes me happy is thinking someone at HPE had to have a convo with AWS and SAP, where SAP argued it'd be easier to build a machine with >1,000 CPUs and 32 TB of RAM than to figure out how to run SAP on more than one node
Day 14 and mayyybbbbee my favorite day so far. Dynamic code gen and peeling back the onion on ways to combine different Python parallelization "layers" .... what more could you want for a Monday? youtu.be/azfCDBKTIoQ
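One flavor of layering (not necessarily what the video does): an outer process pool for CPU-bound chunks, with each worker fanning its chunk out to an inner thread pool for I/O-ish work. A minimal stdlib sketch:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def io_task(x: int) -> int:
    # Inner layer: thread-friendly work (stand-in for e.g. a network call).
    return x + 1

def cpu_chunk(chunk: list[int]) -> list[int]:
    # Each process fans its chunk out to threads: two layers stacked.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(io_task, chunk))

def process_all(chunks: list[list[int]]) -> list[list[int]]:
    # Outer layer: one process per chunk for CPU-bound isolation.
    with ProcessPoolExecutor(max_workers=2) as pool:
        return list(pool.map(cpu_chunk, chunks))

if __name__ == "__main__":
    # Guard is required: process pools re-import this module in children.
    print(process_all([[1, 2], [3, 4]]))  # [[2, 3], [4, 5]]
```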
Lucky Day 13 - all about incremental data processing
youtu.be/_WSOaE5mr4g
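The core trick, framework aside, is a watermark: remember how far you got, and only process records past it. A toy sketch (record shape and field names are made up for illustration):

```python
def process_increment(records: list[dict], watermark: int) -> tuple[list[dict], int]:
    # Incremental processing: only touch records newer than the watermark.
    new = [r for r in records if r["id"] > watermark]
    processed = [{**r, "processed": True} for r in new]
    # Advance the watermark only if we actually saw something new.
    new_watermark = max((r["id"] for r in new), default=watermark)
    return processed, new_watermark

batch = [{"id": 1}, {"id": 2}, {"id": 3}]
first, wm = process_increment(batch, watermark=0)    # processes all three
second, wm = process_increment(batch, watermark=wm)  # nothing new: []
```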
@loppsean.bsky.social's walkthrough of asset checks in Day 11 is awesome, do check it out!
In some cases, you might not want to re-read in the asset data (or you might even want to compare across previous materializations!)
In my example, I use Dagster's metadata system to do just that.
Day 11! The diff between orchestration and data orchestration? Your code can stay the same, still work, but the data can change and break everything! youtu.be/ugS1KQhxWrA
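A toy illustration of that point, nothing Dagster-specific: the transform code below never changes, but an upstream rename in the data breaks it anyway.

```python
def total_revenue(rows: list[dict]) -> float:
    # The transform code stays the same...
    return sum(r["amount"] for r in rows)

ok_rows = [{"amount": 10.0}, {"amount": 5.0}]
assert total_revenue(ok_rows) == 15.0

# ...but the data changes: a renamed column breaks the unchanged code.
bad_rows = [{"amt": 10.0}]
try:
    total_revenue(bad_rows)
except KeyError:
    print("broken by a data change, not a code change")
```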
For day 10 of 30 days of orchestration, one of my fave topics: materializing assets in different environments using resources!
In my solution to day 10, I added an environment variable, DAGSTER_PROD_DEPLOY, which determines whether to use the folder `data_prod`; if the variable is not present, the code falls back to `data_dev`.
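The gist of that env-var switch, boiled down to a few lines (the helper name is mine, not from the repo):

```python
import os

def data_folder() -> str:
    # DAGSTER_PROD_DEPLOY present -> prod folder; absent -> dev folder.
    return "data_prod" if os.getenv("DAGSTER_PROD_DEPLOY") else "data_dev"
```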
Day 10! Data engineers + Pydantic classes = goodness. youtu.be/f_XfLTlHS-s
Day 4! (It's not too late to catch up!) youtu.be/Jaann2QiGwQ
but did you like santa or rudolph daggie?
Day 3! youtu.be/xUJTal1cq0Y
Learn about more complex DAG scheduling, and get a sneak peek of holiday Daggie
Probably not? I think there are generally 3 viable options:
- deploy to Dagster+ serverless: great for individuals playing around who want to run their hobby project
- deploy OSS Dagster: this quickly becomes a "choose your own adventure"
- deploy Dagster+ as a team in prod
Here is my solution: github.com/cnolanminich...
For this one, I wanted to play with the new `target` argument 🤩
For the super common scenario of having one job per schedule, add a "target" argument and select the assets you want -- in my case "asset_one" and everything downstream
#dataBS
ffs @msft365.bsky.social . ffs.
For the benefit of customers that are familiar with Power BI, the table also includes Power BI Premium per capacity P SKUs and v-cores. Power BI Premium P SKUs support Microsoft Fabric. A and EM SKUs only support Power BI items.
The capacity and SKUs table lists the Microsoft Fabric SKUs. Capacity Units (CU) are used to measure the compute power available for each SKU.
Capacities are split into Stock Keeping Units (SKUs). Each SKU provides a set of Fabric resources for your organization. Your organization can have as many capacities as needed.
Me trying to figure out Power BI pricing
To share content and collaborate in Microsoft Fabric, your organization needs to have an F or P capacity, and at least one per-user license.
Day 2! It's a holiday week, let's talk about how to accommodate business schedules with cron
youtu.be/Wja1EmyIexE
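One concrete example of what cron's fields buy you: `0 9 * * 1-5` fires at 09:00 Monday through Friday only. A plain-Python stand-in for that check (dates chosen so the weekdays work out):

```python
from datetime import datetime

def matches_business_cron(dt: datetime) -> bool:
    # Equivalent of the cron expression "0 9 * * 1-5":
    # minute 0, hour 9, Monday (weekday 0) through Friday (weekday 4).
    return dt.minute == 0 and dt.hour == 9 and dt.weekday() < 5

assert matches_business_cron(datetime(2024, 12, 23, 9, 0))      # Monday: runs
assert not matches_business_cron(datetime(2024, 12, 21, 9, 0))  # Saturday: skipped
```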