LinkedIn is notoriously difficult as their APIs are locked behind mysterious partnership contracts. YouTube is a little more straightforward and their documentation is decent. No comment about X as I left that platform years ago...
if you set up a filming location in europe, you already know you'd be booked out for months!
So you DO have the data?? 🤣
Words feel very insufficient in describing the impact you have had on my life.
Thank you for everything and see you in chat!
I know of quite a few spaces in Zurich, as we've been running AnalyticsCamp events for quite a few years. Don't hesitate to ask if you need a hand :D
If you'd like to meet up while in Zurich, just let me know! :D
For me this is not just "opinionated" but simply optimising for a single criterion. I suppose other approaches with a single-criterion focus could be "security" or "privacy", where all other factors are essentially ignored...
...data team members might ingest and load data by dumping CSV files from sources onto their machines and then manually importing them into the database
...they might run Metabase in a Docker container on a laptop that sits in the office, which employees VPN into to access dashboards...
If the primary restriction is budget, then all tool choices will be made with this criterion in mind. For example, all analysts might run local IDEs and connect to a production Postgres database that they share with developers...
Don't forget about the budget data stack, where the only thing that matters is CapEx per month/year
OK!! #DataBS conf starts in 45min and I'm gonna be busy hosting, SO, things to know:
Chat is on discord, NOT zoom: bit.ly/databs-chat
Last min registration: us02web.zoom.us/webinar/regi...
Official Schedule: databsconf.com/schedule/
Bit last minute (my bad) but anyone keen to attend a free online data nerds conference tomorrow? :D
i cannot stop laughing at this
I use time.is a lot for their spreadsheet-like view (time.is/compare) and it's awesome
it's about how to pronounce Bulbasaur isn't it
The blob of toothpaste on your toothbrush is called a nurdle.
that would be an insanely awesome session if you combine it (somehow) with some data stuff ;)
oh boo :( did you try some of the aliases too in the dropdown?
sharklasers
you're welcome :D
I'm at 0.13
wussup @sentimentbot.bsky.social
When using it downstream, you can do COUNT(unique_eventid) and not have to worry about duplicates, but if you want to split by dimension you'll need to build a way to select a key from the JSON and do JSON_EXTRACT() - we did that process using count.co fairly easily in a canvas.
For example, I designed a compromise by creating a user actions table that has a JSON column for additional dimensions. As dimensions change per action (e.g. onboarding stuff shows which channel someone came from, but conversion actions have revenue values), this was a good way to handle it.
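A minimal sketch of that pattern, using SQLite's built-in json_extract (the table, column, and key names here are made up for illustration, not the actual schema):

```python
import sqlite3

# Hypothetical user_actions table: one row per event, with a JSON
# "dimensions" column whose keys vary by action type.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_actions (
        event_id   TEXT,
        action     TEXT,
        dimensions TEXT  -- JSON blob, keys differ per action
    )
""")
conn.executemany(
    "INSERT INTO user_actions VALUES (?, ?, ?)",
    [
        ("e1", "onboarding", '{"channel": "ads"}'),
        ("e1", "onboarding", '{"channel": "ads"}'),   # duplicate event
        ("e2", "conversion", '{"revenue": 49.0}'),
        ("e3", "onboarding", '{"channel": "organic"}'),
    ],
)

# Dedupe with COUNT(DISTINCT ...), then split by a JSON-extracted dimension.
rows = conn.execute("""
    SELECT json_extract(dimensions, '$.channel') AS channel,
           COUNT(DISTINCT event_id) AS events
    FROM user_actions
    WHERE action = 'onboarding'
    GROUP BY channel
    ORDER BY channel
""").fetchall()
print(rows)  # [('ads', 1), ('organic', 1)]
```

The same `json_extract` call is what you'd wire up in a tool like count.co to pull a key out of the JSON column on demand, without having to migrate the table every time a new dimension appears.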
I do believe it's possible to store a lot of logic in the pre-processing stages before data is used in reporting, data products etc. but as others have mentioned, it's always a question of tradeoff.
To answer the original question, I think the term is a little silly. We can just name things more explicitly, e.g. "moving logic upstream to the warehouse", rather than rely on the notion that everyone will know what this marketing slogan means.
This term only makes sense if you draw your diagrams from left to right, which is a common assumption in the West, but I'm not sure it holds for countries that write right to left...
If I told you I had it and that you should trust me, would you believe me?
I'm not even sure data matters anymore