
Alejandro

@acompa

ML / AI at Not Diamond

82 Followers · 87 Following · 18 Posts · Joined 13.04.2023

Latest posts by Alejandro @acompa

[Link preview] Not Diamond open roles | Notion: Not Diamond is a multi-model AI infrastructure platform used by Fortune 100s and leading startups, backed by folks like Jeff Dean, Julien Chaumond, and Ion Stoica. We are a small, elite team over-inde...

- prompt adaptation capabilities with outstanding results for a Fortune 100 company

We’re now looking to fill three technical roles to build more and support our partners. If you want to build impactful #genai products, see the link for more details.

notdiamond.notion.site/Not-Diamond-...

22.02.2025 18:54 👍 1 🔁 0 💬 0 📌 0

About to hit Year 1 building multi-model #generativeai on Not Diamond’s founding team, and we’re growing our technical staff during Year 2.

We’ve shipped

- a world-class router with SOTA performance,

- open-source, client-side fallback and reliability tools for everyone from scale-ups to large enterprises,

22.02.2025 18:53 👍 0 🔁 0 💬 1 📌 0

AGI is ruined!!

21.12.2024 23:29 👍 3 🔁 0 💬 0 📌 0

But “meaningful [to the market]? when the market has yet to see that threshold” begs the question.

If you believe the market doesn’t want o1, and ask me to demonstrate otherwise, then I don’t have a shot at convincing you. Even if I point to multiple quarters of Meta’s earnings calls, right?

21.12.2024 23:10 👍 4 🔁 0 💬 0 📌 0

Yeah that’s fair. At a minimum we’re seeing strong adoption (ranging from prototypes to production) across customer service contexts, data annotation / summarization, software development, and operational process automation.

21.12.2024 23:02 👍 4 🔁 0 💬 0 📌 0

[Post image]

Well clearly we’ve reached AGI here

21.12.2024 22:07 👍 1 🔁 0 💬 2 📌 0

😂

21.12.2024 19:17 👍 0 🔁 0 💬 0 📌 0

The spirit of Old Data Twitter is alive and kicking here 🥲

21.12.2024 17:24 👍 0 🔁 0 💬 0 📌 0

[Link preview] OpenAI is lying about o-1’s Medical Diagnostic Capabilities: Uncovering critical issues with the model + suggestions on how to improve it for medical diagnosis

Strongly agreed. I’ve seen some embarrassing (at best!) medical failures from o1. machine-learning-made-simple.medium.com/openai-is-ly...

21.12.2024 17:20 👍 1 🔁 0 💬 0 📌 0

(I’m arguing from a perspective where (1) AGI claims are unrealistic, (2) most AI marketing deserves skepticism, and yet (3) we can still develop meaningful apps / workflows around these models.)

21.12.2024 17:19 👍 1 🔁 0 💬 1 📌 0

And you’re concluding this because of

> the researchers' hypothesis that LLMs look for patterns in reasoning problems, rather than innately understand the concept

right?

This is absolutely a failure from an AGI perspective. But could it be useful to identify generalized reasoning patterns?

21.12.2024 17:17 👍 1 🔁 0 💬 1 📌 0

I get it, my industry absolutely has a terrible track record with product hype. I personally hate it.

But the people I know engaging in this work *aren’t* the OAIs of the world. They’re uni lab startups quietly working with hospitals and researchers.

21.12.2024 17:12 👍 2 🔁 0 💬 1 📌 1

Hugs on your pops. That’s fucking terrible.

21.12.2024 17:06 👍 0 🔁 0 💬 0 📌 0

People are! And they’re fine-tuning models atop Meta’s Llama etc. to work with clinical notes and scans! But that’s way less exciting to talk about than OpenAI palace intrigue.

21.12.2024 17:05 👍 1 🔁 0 💬 1 📌 0

(From www.wheresyoured.at/subprimeai/ for context for others)

21.12.2024 16:16 👍 0 🔁 0 💬 0 📌 0

I’ll share one:

> “a big, stupid magic trick” in the form of OpenAI's (rushed) launch of its "o1 (codenamed: strawberry)" model

You quoted yourself re: “a big, stupid magic trick.” So: why does o1 qualify as one?

21.12.2024 16:15 👍 2 🔁 0 💬 3 📌 0

No but seriously lol

21.12.2024 15:44 👍 9 🔁 0 💬 1 📌 0

IMO it depends on your goal as a media professional. “LLMs got my answers wrong so they’re bad” is both factually correct and superficial. You can certainly run that, or you can explore _why_ they’re wrong in order to enrich your findings.

21.12.2024 15:31 👍 0 🔁 0 💬 1 📌 0

[Post image]
12.04.2023 01:01 👍 7 🔁 3 💬 0 📌 0