Jamie Tidman

@jamietidman

CTO at Japeto.ai - currently working on building secure language AI products for healthcare and local government.

241 Followers · 1,090 Following · 17 Posts · Joined 19.11.2024
Latest posts by Jamie Tidman @jamietidman

Europe != UK

21.10.2025 10:25 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Alexa powered by Amazon Titan sounds even less useful than it currently is. Titan is by far the worst commercial LLM I've used.

05.02.2025 21:30 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

There's a bit in Silicon Valley where they give admin access to an Adderall-fueled teenage coder who destroys the entire system, and I feel like that's about to play out on a massive scale

04.02.2025 16:09 πŸ‘ 7 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

This still makes sense canonically given this version of Lorca was from the evil timeline.

Every car is a Cybertruck in the mirror universe

29.01.2025 14:23 πŸ‘ 12 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I agree with your last point wholeheartedly! My point is that Deepseek is a vindication of LLMs as a technology, not a threat or a sign of failure.

27.01.2025 23:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Proportional*

27.01.2025 22:52 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yes, it is unfair given the advances are promotional to the investment. Deepseek makes that investment more rational, not less.

27.01.2025 22:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

Fair enough. I’d say given the advances in AI over the last 3 years anything short of ASI is apparently going to be defined as β€œnot enough progress”!

27.01.2025 22:48 πŸ‘ 1 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

β€œLittle to no progress”!!

27.01.2025 22:43 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

An extremely significant development in AI efficiency = overhyped?

That doesn't make sense to me.

27.01.2025 20:35 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I can see a market for this for some exasperated CISO who fields so many dumb cybersecurity questions that he says FINE WE’RE STORING ALL OUR DATA ON THE MOON

25.01.2025 13:02 πŸ‘ 15 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

For a large swathe of the population "OpenAI" and "AI" are the same, unfortunately.

24.01.2025 13:49 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

T630

23.01.2025 19:05 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

llama-cpp-python

23.01.2025 19:02 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

For us they are a useful local test proxy for our production cloud environment, which uses L4s. They still have their uses.

23.01.2025 18:50 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Yep. This is not the build for tokens per second. Our use case for this is batch processing - it's not fast enough for real-time chat on larger models.

22.01.2025 20:47 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

For us, it was buying a very old Dell PowerEdge server for Β£100 and putting 4 Tesla P40s in it.

Very slow, but it has 96GB VRAM and runs 70B models comfortably.

22.01.2025 20:35 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
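The build described above can be sanity-checked with a back-of-envelope VRAM estimate. The GPU count and per-card VRAM come from the posts; the quantisation level (~0.5 bytes per parameter, i.e. Q4) and the 20% overhead factor for KV cache and buffers are assumptions for illustration, not something the posts state.

```python
# Rough VRAM estimate for running a 70B model on 4x Tesla P40 (24 GB each).
# ASSUMPTIONS (not from the posts): Q4 quantisation ~ 0.5 bytes/param,
# and a flat 20% overhead for KV cache and runtime buffers.

def fits_in_vram(params_billion, bytes_per_param, vram_gb, overhead=1.2):
    """Return (estimated GB needed, whether it fits in the given VRAM)."""
    needed_gb = params_billion * bytes_per_param * overhead
    return needed_gb, needed_gb <= vram_gb

gpus = 4
vram_per_gpu_gb = 24                   # Tesla P40
total_vram = gpus * vram_per_gpu_gb    # 96 GB across the server

needed, ok = fits_in_vram(70, 0.5, total_vram)
print(f"~{needed:.0f} GB needed, fits in {total_vram} GB: {ok}")
```

Under these assumptions a Q4 70B model needs roughly 42 GB plus headroom, which is why 96 GB of pooled VRAM runs it comfortably even though the P40s themselves are slow.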