Don't use agents/prompting frameworks
They make it *slightly* easier to get started, but much harder as soon as you try to do anything custom.
The best agents framework is Python.
What's the tie you're breaking? Two tokens with exactly equal probabilities?
Wdym by tie breaking?
Anyone understand why OpenAI models aren't deterministic when temp=0?
If you REALLY want to do something like this, tell the model to write a Python script that generates a random password fitting the requirements, then copy/paste the script into a terminal and run it.
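If you go that route, the script itself is trivial. A minimal sketch using Python's cryptographically secure `secrets` module (the specific requirements here, at least one of each character class, are an assumption for illustration):

```python
import secrets
import string

def generate_password(length=20):
    """Generate a password with at least one character from each class,
    using the cryptographically secure `secrets` module."""
    if length < 4:
        raise ValueError("length must be at least 4")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    # Guarantee one character from each required class.
    chars = [secrets.choice(c) for c in classes]
    alphabet = "".join(classes)
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    # Shuffle so the guaranteed characters don't sit in predictable positions.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password(20))
```

The key point: the entropy comes from the OS's CSPRNG (`secrets`), not from the LLM's sampling.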
Or better yet, just use any password manager, since they can all do this.
The trouble is LLMs are not good at being random. The output might look random, but if an attacker followed the same procedure you did, the chances they would reproduce the exact same password are MUCH higher than chance.
Even if your LLM chat history is totally private, the generated password will not be truly random. It might even be regurgitating publicly known passwords it saw during training.
Someone at work: "Look, you can copy/paste the password requirements into a local LLM to generate a compliant password!"
PSA: Don't do this.