Are you seriously expecting me to believe that the claims made on LinkedIn are made up, to chase engagement numbers that don't matter anyway?
Next you'll be telling me that OpenAI and Anthropic are paying people to post on LinkedIn.
What will things be like in 50 years, when there are no senior developers left to second-guess AI?
I guess the answer that I'll get back is that AI will have vastly improved. Except that depends on infinite funding and infinite computational resources.
Oh.
Also, is Indeed the only place that people advertise jobs? I don't think so. Many don't, and wouldn't even consider it.
The graph shows an uptick, but why do you call it a surge? Let's compare with Q1 for the last decade.
Also, where is your data that proves that AI is responsible?
Because dictatorships don't need to worry about election cycles.
But let's be clear: AI is not taking anyone's job. Executives who are Big AI sycophants are taking your job.
The endgame? Mass unemployment. Execs want to reduce their salary costs, not realising that those salaries are the source of their revenue.
This is what the capitalists want. Obedience and compliance. It's what they've always wanted: useful idiots.
Everything is awful. This is not news.
Everyone is so focused on today that they aren't thinking about what tomorrow will look like.
The endgame of AI is to put billions out of work so that employers are saved the cost of salaries.
It's fundamentally short-sighted and ignorant of economic reality.
They are treated as tools by the lords of capitalism, sure. That's why you have the obnoxious term "Human Resources".
However, my point is that AI and humans are not equivalent. But we are now at the point where the economy, which should serve humans, has become the master.
AI is a tool. Tools are generally expected to provide correct outputs.
The output of a tool is not equivalent to the output of a human. To claim otherwise is cultish zealotry.
This is capitalism gone wrong, where success MUST mean billion-dollar revenues and having ownership of your own company diluted by a bunch of investors.
It always amuses me when solopreneurs use "we" instead of "I". And when one person is referred to as a team.
Don't they know that faceless big business, which only values you for how much money you can spend, is less appealing than the human touch?
Capitalism: all that matters is that you make money. I've somewhat soured on this assumed orthodoxy that pollutes Western cultures.
If a product consists of 100% AI-generated code, who owns the copyright on that code?
If the answer is "nobody", what risks does that invite?
If your income relies on technical solutions, you will tend to reject non-technical solutions that are superior. Many do. Sad, but true.
Everyone wants to hire seniors. Nobody wants to make them. uxdesign.cc/everyone-wan...
Isn't this what the second amendment was intended for?
Shipping in 1 day is even better.
Oh, you can't hyperfocus? Yes, I imagine that is quite the hurdle.
I am no AI advocate, but I don't see how lamenting that a process has changed for a totally different scenario is justified. Happy to be proved wrong, as always.
It all comes down to risk management. Before AI: manual checking, automated tests. The whole point of AI seems to be to move so fast that manual checking is impossible. So I imagine that more effort is needed in test automation and process automation.
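To illustrate what I mean by shifting effort into test automation, here is a minimal sketch in Python with pytest. The apply_discount function and its pricing rules are hypothetical, invented purely to show the kind of guard-rail that has to hold when no human is manually reviewing every AI-generated change:

    import pytest

    # Hypothetical pricing rule, used only for illustration.
    def apply_discount(price: float, percent: float) -> float:
        """Return the price reduced by percent, never below zero."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return max(price * (1 - percent / 100), 0.0)

    # Automated checks that run on every change, fast enough that
    # nobody needs to eyeball each individual diff.
    def test_basic_discount():
        assert apply_discount(100.0, 25.0) == 75.0

    def test_zero_discount_is_identity():
        assert apply_discount(80.0, 0.0) == 80.0

    def test_invalid_percent_rejected():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150.0)

If checks like these run automatically on every commit, manual review stops being the bottleneck; the risk is managed by the test suite instead of by human eyeballs.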
Are large batches caused by waterfall, as a rule? I'm not sure about that.
But isn't this based on refactoring "by hand", instead of AI doing it?
Are you planning on turning lead into gold, too?
What is the manner of this enablement?
Isn't this just risk management? If your risk management is bumped elsewhere (really good automated tests), does the "one atomic change at a time" rule need to remain?