Grateful for everyone, especially Matt Hildreth, for putting on this great @ruralorganizing.bsky.social event.
3. Democrats must SHOW UP (with branding) in communities as contributors. This means having desks at rodeos, pitching in at community events, speaking at civic meetings, gaining competency, and running for office.
...
2. Democrats must inspire hard-working, independent, and caring voters (in that order). Too often we lead with "caring," but our brand already owns that sector. If anything, voters see us as "too caring."
...
1. Democrats lost the 2024 election (and face continuing headwinds) due to their failure to inspire young voters, especially young men (but young women, too), in rural counties.
...
Takeaways from last weekend's Rural Organizing conference...
@davidshor.bsky.social is one of the most insightful speakers on voting trends and what Democrats must do to win. Shor presented, with useful graph after useful graph, at Saturday's half-day Rural Organizing conference. The whole thing was energizing and thoughtful.
So next time someone suggests you examine some concept or build some new skill, let them know you'll put your minds to it.
I mentioned this to pal Errol Arkilic (UCI Chief Innovation Officer), and he said, "Have you read 'Society of Mind,' by Marvin Minsky? That book seems a lot like this 'multi-headed attention' notion in Transformers."
Different minds (with different queries) are called "heads" in the Transformer architecture.
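If it helps to see it concretely: here's a toy NumPy sketch of that multi-headed idea. This is my own illustration, not any library's API; the random weights stand in for what a trained model learns, and I'm leaving out masking and the final output projection.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, n_heads=4):
    """Each head (each 'mind') gets its own Q/K/V projections, attends to
    the sentence independently, and the heads' answers are concatenated."""
    T, d_model = X.shape
    d_head = d_model // n_heads
    rng = np.random.default_rng(1)
    outputs = []
    for _ in range(n_heads):
        W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
        Q, K, V = X @ W_q, X @ W_k, X @ W_v          # this head's own queries
        scores = softmax(Q @ K.T / np.sqrt(d_head))  # word-to-word relevance
        outputs.append(scores @ V)                   # this head's "reading"
    return np.concatenate(outputs, axis=-1)          # back to (T, d_model)

sentence = np.random.default_rng(0).normal(size=(5, 16))  # 5 words, toy dims
print(multi_head_attention(sentence).shape)               # (5, 16)
```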
But in a Transformer architecture, it gets a bit more complicated. Different minds will explore different queries, like maybe What fun stuff can *I* do in "Africa"? We might then get a different set of Keys (Key:drinksbeer or Key:entertaininglecturer for "Jane"?).
This creates a cloud of meanings around each word, informed by the other words in the sentence. If you're like me, that will make you pause for a minute and ask, "Is that what I'm doing when I comprehend a sentence?"
Through repeated thinking, a Transformer forms a Query for each word, and answers that query with key/value vectors from each other word in the sentence (and, actually, even that same word).
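A rough NumPy sketch of that step, with toy sizes and random stand-in weights (in a real model, W_q, W_k, W_v are learned):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                          # 5 words, toy embedding size
X = rng.normal(size=(T, d))          # one vector per word
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v  # a Query, Key, Value per word

# Every Query is scored against every Key -- note the diagonal:
# each word also attends to itself.
scores = Q @ K.T / np.sqrt(d)        # (T, T) relevance scores
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
answers = weights @ V                # each Query answered with blended Values
```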
But if Jane is an infant in the context of Africa, Key:deepunderstanding would likely have a low value.
There are other key-value pairs associated with this query about "Africa." For example, deepunderstanding might be another Key. If Jane is a primatologist, based on surrounding words, then Key:deepunderstanding might have a high value.
If the subject of the action in Africa is "Jane", then Key:subject for "Jane" might have a high value (think: "Jane is the likely subject") related to Africa in this sentence.
Other words fill in some of the "features" associated with that word's Query. We call each feature a Key. For example, one key could be the subject doing something in Africa.
Each word poses some questions, and possibly one main question. For example, (depending on other words around it) "Africa" might prompt the question What's happening in Africa? Call that Q... the Query.
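To make that Query/Key pairing concrete, here's a toy example with invented vectors (nothing learned here; it just shows that a dot product scores how well a Key answers a Query):

```python
import numpy as np

q_africa = np.array([1.0, 0.0, 1.0, 0.0])  # "What's happening in Africa?"
k_jane   = np.array([0.9, 0.1, 0.8, 0.0])  # "I'm a likely subject"
k_the    = np.array([0.0, 0.2, 0.1, 0.0])  # a filler word has little to offer

d = len(q_africa)
print(q_africa @ k_jane / np.sqrt(d))  # high score: "Jane" answers the Query
print(q_africa @ k_the / np.sqrt(d))   # low score: "the" is mostly ignored
```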
Pal Justin Bejenaru and I are currently studying Transformer architecture (the algorithm behind ChatGPT). It's a crazy morass of neural network layers all talking to each other.
The big picture (from a human perspective) is this: when you decode a sentence...
I'm checking it out on a flight tomorrow to Maui...
If there were a @mauidreamsdiveco.com BlueSky account, you could inform BlueSky users about events, etc., and pals could follow it. :) Point your social media folks at this post?
Currently completing @andrewyng.bsky.social's Deep Learning Specialization course 5: Sequence Models. I've really loved this course sequence (on Coursera). Ng is charming. It's all about cats (and linear algebra, Python, and AI celebrities).