"PhD-level experts in your back pocket" is a completely nonsensical description of AI but a pretty good description of social media if you follow the right people
"PhD-level experts in your back pocket" is a completely nonsensical description of AI but a pretty good description of social media if you follow the right people
There was a wonderful talk at PyData Amsterdam last year on using machine learning to study and preserve artworks at the Rijksmuseum, strong recommend.
youtu.be/kMfl5SzfkVc?...
"Journaling is almost the physical exercise of the mental health world; .....The reason itβs not is that physical exercise is also the physical exercise of the mental health world."
daystareld.com/journaling-1...
Guard's patrolling path for part 1 visualized using @matplotlib.bsky.social
adventofcode.com/2024/day/6
#AdventOfCode #day6
Spent almost 30 mins debugging Part 2 only to find I used the wrong variable.
I just completed "Ceres Search" - Day 4 - Advent of Code 2024 #AdventOfCode adventofcode.com/2024/day/4
re.finditer(pattern, string) returns an iterator yielding match objects. A match object m has m.start() and m.end(), which return the index of the first character and the index of the last character + 1 of the matched substring.
docs.python.org/3/library/re...
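A minimal sketch of what the post describes; the sample text and pattern are my own, chosen just to show where each match starts and ends:

```python
import re

# Find every occurrence of the pattern and report where it matched.
text = "mul(2,4) junk mul(3,7)"
for m in re.finditer(r"mul\(\d+,\d+\)", text):
    # m.start() is the index of the first matched character;
    # m.end() is one past the index of the last matched character,
    # so text[m.start():m.end()] slices out the match exactly.
    print(m.start(), m.end(), text[m.start():m.end()])
```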
TIL how to get the indices of matched strings when using the re module in the Python standard library.
Part 2 was fun. Using Regex with capture groups makes Part 1 straightforward.
I just completed "Mull It Over" - Day 3 - Advent of Code 2024 #AdventOfCode adventofcode.com/2024/day/3
regexone.com is a great resource to start learning regex as well.
Struggled a little with Part 2, but got it done with what I feel is an inefficient algorithm.
#AdventOfCode adventofcode.com/2024/day/2
I just completed "Historian Hysteria" - Day 1 - Advent of Code 2024 #AdventOfCode adventofcode.com/2024/day/1
I didn't know data visualization tournaments were a thing until now. I'd love to participate.
Book outline
Over the past decade, embeddings, numerical representations of machine learning features used as input to deep learning models, have become a foundational data structure in industrial machine learning systems. TF-IDF, PCA, and one-hot encoding have always been key tools in machine learning systems as ways to compress and make sense of large amounts of textual data. However, traditional approaches were limited in the amount of context they could reason about with increasing amounts of data. As the volume, velocity, and variety of data captured by modern applications has exploded, creating approaches specifically tailored to scale has become increasingly important. Google's Word2Vec paper made an important step in moving from simple statistical representations to semantic meaning of words. The subsequent rise of the Transformer architecture and transfer learning, as well as the latest surge in generative methods, has enabled the growth of embeddings as a foundational machine learning data structure. This survey paper aims to provide a deep dive into what embeddings are, their history, and usage patterns in industry.
Cover image
Just realized Bluesky allows sharing valuable stuff because it doesn't punish links. 🤩
Let's start with "What are embeddings" by @vickiboykis.com
The book is a great summary of embeddings, from history to modern approaches.
The best part: it's free.
Link: vickiboykis.com/what_are_emb...