LLMs made machines behave more like a human, which is not necessarily a good thing.
Machines used to be deterministic and reliable; humans aren't.
Right now I have plain html+javascript served by nginx.
This is already a mess of ifs, and I only have 3 buttons.
I'm not a frontend guy, so I'd prefer something simple.
#include <type_traits>

template<typename ...Ts> struct types;

template<typename ...Ts>
struct unpack { using type = types<Ts...>; };

template<typename ...Ps, typename ...Ts>
struct unpack<types<Ps...>, Ts...> { using type = types<Ps..., Ts...>; };

template<typename H, typename ...Tail>
struct rev {
    using rest = typename rev<Tail...>::type;
    using type = typename unpack<rest, H>::type;
};

template<typename T>
struct rev<T> { using type = types<T>; };

struct Point { int x; int y; };

int main() {
    static_assert(std::is_same_v<rev<int, bool, Point>::type, types<Point, bool, int>>);
    static_assert(std::is_same_v<rev<int, bool, char>::type, types<char, bool, int>>);
    static_assert(std::is_same_v<rev<int, int>::type, types<int, int>>);
    static_assert(std::is_same_v<rev<int>::type, types<int>>);
}
Reversing the order of types in a parameter pack in #cplusplus (nostd, 100% organic code)
Investigating a straightforward, low-overhead, quick-to-start reactive UI framework. Any recommendations?
Live updates: ON/OFF button
You can now disable live updates on hive-index.
Sometimes I wonder whether we should thank Dennis Ritchie for designing C or curse him.
If C hadn't been created, we could all be writing in descendants of Modula, ML, or Lisp.
-- Lux Language Reference (brief)
-- ──────────────────────────────
-- Types
--   pixel    Packed ABGR u32 or decoded {r,g,b,a} float channels
--   image    2-D grid of pixels with .width, .height, .data
--   color    Named vector in a colorspace (e.g. yiq.y, yiq.i, yiq.q)
--   region   Rectangular area {x1, y1, x2, y2}
--   scalar   f64 numeric value
I asked Claude to investigate the odiff image comparison algorithm and design a language that expresses the same thing in less code. This is what it did.
"The Lux version is roughly 3x shorter than the Zig source while remaining unambiguous"
gist.github.com/serpent7776/...
# | Custom JSON ID   | Count       | Percent | Latest Block
--|------------------|-------------|---------|-------------
1 | sm_market_rent   | 358,968,236 | 13.8%   | 104,426,058
2 | ssc-mainnet-hive | 344,710,658 | 13.2%   | 104,426,918
https://index.hive-adventures.com/
hive-index now has a percent column.
Looks like sm_market_rent and ssc-mainnet-hive take 27% of all custom json operations.
An orange tabby cat peeks out from the center of a massive, tangled ball of colorful wires, yarn, and mechanical gears. The ball sits on a cluttered wooden workbench surrounded by soldering irons, circuit boards, and books. A thought bubble above the cat's head reads, "COMPLEXITY KILLED THE CAT."
"Complexity killed the cat"
--allow-empty Usually recording a commit that has the exact same tree as its sole parent commit is a mistake, and the command prevents you from making such a commit. This option bypasses the safety, and is primarily for use by foreign SCM interface scripts. https://man.freebsd.org/cgi/man.cgi?sektion=0&manpath=FreeBSD%2013.1-RELEASE%20and%20Ports&arch=default&format=html&query=git-commit
#TIL about git commit --allow-empty
hive-index now flashes green briefly for updated entries.
What's the point of ORMs when I can change underlying database with one prompt?
AI: I can't look up live data from this environment.
Me: why can't you look it up?
AI: You're right, let me try fetching it.
Me: π€¦
https://index.hive-adventures.com/
I built a real-time custom JSON operations dashboard for the Hive blockchain.
GPT5.3 Codex X-High be like:
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Thoughts
Error while editing /dev/null
Why tf are you trying to edit /dev/null GPT?
I hit a chain of internal action errors by repeatedly trying to update non-existent tracked items, and the environment halted further actions for this turn. How do you want to proceed? I can continue without further workspace actions and just give you the exact run/verify steps for the new probe binary. Or you can ask me to resume normal editing/checking in the next turn, and I'll continue from there.
tf that even means GPT?
From a high-level perspective, table storage is organized into a four-layer hierarchy:
• A Table is horizontally partitioned by rows into multiple Row Groups.
• A Row Group is vertically partitioned by columns into multiple Column Data objects.
• A Column Data object is further horizontally partitioned by rows into multiple Column Segments.
• A Column Segment represents the actual stored data. It typically corresponds to a single 256-KB Data Block but may also share a block with other segments.
https://www.alibabacloud.com/blog/duckdb-internals---part-2-table-storage-format_602657
duckdb table storage layout.
module IntTuple = struct
  type t = int * int
  let compare (x0, y0) (x1, y1) =
    match Pervasives.compare x0 x1 with
    | 0 -> Pervasives.compare y0 y1
    | c -> c
end

module IntTupleSet = Set.Make(IntTuple)
There seriously isn't an easier way to construct a set of tuples in OCaml?
Hold on, I need a unification algorithm.
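For what it's worth, there is a shorter route: a sketch relying on the fact that OCaml's polymorphic Stdlib.compare orders tuples lexicographically, which for int * int matches the hand-written IntTuple.compare (it would not be suitable if you needed a custom element order):

```ocaml
(* Sketch: Stdlib.compare is the polymorphic structural compare; on tuples
   it compares components left to right, so for int * int it behaves the
   same as the explicit match in the longer IntTuple version. *)
module IntTupleSet = Set.Make (struct
  type t = int * int
  let compare = Stdlib.compare
end)

let () =
  let s = IntTupleSet.of_list [ (1, 2); (0, 5); (1, 2) ] in
  assert (IntTupleSet.cardinal s = 2);       (* duplicate (1, 2) collapsed *)
  assert (IntTupleSet.min_elt s = (0, 5))    (* lexicographic minimum *)
```

Since OCaml 4.07, Pervasives is deprecated in favour of Stdlib; the explicit match is only needed when the ordering should differ from the structural one.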
The Conventional Wisdom

If you follow the standard documentation, the logic is sound:
• Write Lock (.write()): Exclusive access. Only one thread moves at a time.
• Read Lock (.read()): Shared access. Multiple threads have access.

In a read-heavy tensor workload, the RwLock should be a massive win. But on modern multi-core chips like the M4, the "cost of admission" for a Read Lock is much higher than you think.
https://eventual-consistency.vercel.app/posts/write-locks-faster
Sounds wild.
btop cpu usage graph showing repeated sine-like usage with overlaps.
Quite funny cpu usage pattern.
1. Aggressively adopt AI while staying accountable

Every employee, from engineering to G&A and GTM, is expected to actively adopt AI tools to accelerate productivity and decision-making. The world is moving quickly, and we must lean into change.

Crucially, this does not transfer responsibility to the tool. For example, engineers building large-scale production software remain accountable for correctness, design quality, and long-term maintainability. AI expands our capabilities, but it does not outsource judgment. Work produced with AI should be understood, validated, and owned just as deeply as work written by hand. Reputation is still built on outcomes, not prompts.
https://www.modular.com/blog/the-claude-c-compiler-what-it-reveals-about-the-future-of-software
Benefits expected of modular programming are:
• managerial — development time should be shortened because separate groups would work on each module with little need for communication;
• product flexibility — it should be possible to make drastic changes to one module without a need to change others;
• comprehensibility — it should be possible to study the system one module at a time.
Benefits of modular programming
pasta

pasta (Pack A Subtle Tap Abstraction) provides equivalent functionality to network namespaces, as the one offered by passt for virtual machines.

If PID or --netns are given, pasta associates to an existing user and network namespace. Otherwise, pasta creates a new user and network namespace, and spawns the given command or a default shell within this context. A tap device within the network namespace is created to provide network connectivity.

For local TCP and UDP traffic only, pasta also implements a bypass path directly mapping Layer-4 sockets between init and target namespaces, for performance reasons.
https://manpages.debian.org/unstable/passt/pasta.1.en.html
#TIL about pasta
rank | id                 | count     | latest
-----|--------------------|-----------|----------
   1 | sm_market_rent     | 358959398 | 104116800
   2 | ssc-mainnet-hive   | 343401979 | 104116816
   3 | sm_find_match      | 210611059 | 104035459
   4 | sm_start_quest     | 199671426 | 98413509
   5 | sm_submit_team     | 181269329 | 103975542
   6 | sm_claim_reward    | 164555933 | 104116814
   7 | follow             | 159892243 | 104116809
   8 | sm_gift_cards      | 136488475 | 104116803
   9 | sm_token_transfer  | 92748678  | 104116815
  10 | sm_market_purchase | 57437495  | 104116816
  11 | sm_team_reveal     | 56618599  | 82633041
  12 | sm_stake_tokens    | 48032881  | 104116811
Top 12 custom json operations on Hive blockchain.
Dictation allows you to speak your prompt to Claude. Your spoken words are then converted to text, which is used to prompt Claude as if you had typed them out. Learn more about using dictation here. After converting your speech input to text, we delete your audio recording but the text of your chat will be retained in accordance with our retention periods. We do not use your voice for training our models. If you have allowed us to use your chats or coding sessions to improve Claude, the transcribed text may be used in accordance with your privacy settings. Learn more. https://privacy.claude.com/en/articles/10067979-what-personal-data-is-collected-when-using-dictation-on-the-claude-mobile-apps
Is Claude the only one that explicitly says that it doesn't store voice recordings when using audio input?
The AI had read the converter, understood what it does, and written tests confirming that it behaves exactly as implemented. It verified that functions get converted, that state variables appear in the output, that control flow structures are present. Every assertion was technically correct. The converter does those things. But the tests never compared the output against the input. They never asked: βDoes the generated Solidity do the same thing as the original contract?β They confirmed the converter runs without errors. They did not confirm the converter produces correct results. The AI tested the code we had. Not the code we wanted. https://doodledapp.com/feed/ai-made-every-test-pass-the-code-was-still-wrong
This is exactly how you should NOT write tests.
GPT is definitely an introvert; it doesn't let any thought out.
Opus is an extrovert; it spits out everything in its thought process.