1/6 To assess performance at the individual level, at the end of every project you will ask managers NOT about the skills of each team member but about their own future actions with respect to that person! Here is a quick clickable example: bit.ly/3NGXvB1
Full article: bit.ly/3UMv0Ws
13.11.2024 19:09
1/5 One approach to overcoming this problem is to use standardized, systematic, project-based assessment...
13.11.2024 19:06
1/4 Scullen et al. (2000) found that 62% of rating differences came from the evaluator's own quirks and preferences, and only 21% reflected real performance! (grinning troll face)
13.11.2024 19:05
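The Scullen et al. result is a variance decomposition: how much of the spread in ratings comes from who is being rated versus who is doing the rating. Here is a toy sketch of that idea with invented effect sizes (the 62%/21% split is the paper's; every number below is made up for illustration):

```python
import random
import statistics

# Toy illustration: much of the variance in performance ratings can come
# from the rater, not the ratee. All effect sizes are invented.
random.seed(0)
n_ratees, n_raters = 50, 10

# "True" performance per ratee (sd ~ 1.0) and rater leniency (sd ~ 1.5).
ratee_effect = [(i - (n_ratees - 1) / 2) / 14.43 for i in range(n_ratees)]
rater_effect = [(j - (n_raters - 1) / 2) * 0.52 for j in range(n_raters)]

# Every rater rates every ratee, plus a little random noise.
ratings = [
    (i, j, ratee_effect[i] + rater_effect[j] + random.gauss(0, 0.5))
    for i in range(n_ratees) for j in range(n_raters)
]

# Crude decomposition: spread of per-ratee means vs per-rater means.
per_ratee = [statistics.mean(r for i2, _, r in ratings if i2 == i) for i in range(n_ratees)]
per_rater = [statistics.mean(r for _, j2, r in ratings if j2 == j) for j in range(n_raters)]

print(f"variance from ratees (real performance): {statistics.pvariance(per_ratee):.2f}")
print(f"variance from raters (idiosyncrasy):     {statistics.pvariance(per_rater):.2f}")
```

With these made-up effect sizes, the rater component dominates, which is the qualitative pattern the paper reports.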
1/3 These factors tend to colour their (our) judgment more than the employee's actual work does!
13.11.2024 19:05
1/2 When managers rate employees, their assessments often reflect (1) their own experience, (2) personal values, and (3) whatever data they can recall - usually just recent events (hello, recency bias) instead of systematically collected historical data points.
13.11.2024 19:04
1/1 Performance management is due for an overhaul!
A lot of companies still use the old A/B/C type of performance management system, but evidence shows these systems often miss the mark on real performance. They tend to say more about the managers than the employees!
13.11.2024 19:03
It's ridiculous that a convicted felon could even have a shot at the presidency, imo. If these criminal cases get swept under the rug, it could mean the end of democracy in the U.S.
13.11.2024 12:04
Honestly, a social network is nothing without a critical mass of people, and Bluesky just doesn't have it. It's like throwing a party where no one shows up. Sure, the sky's blue and all, but without enough people, there's no point sticking around.
06.11.2024 11:49
What's the deal with this whole 🦋 thing anyway? Why would anyone switch over? Just because of Elon? I'm open to some real reasons if anyone's got them, but so far, I'm not seeing the point.
Bluesky just feels empty, like staring up at the sky and waiting for something to happen…
06.11.2024 11:47
I bet it's #1, I had the same experience.
06.11.2024 11:41
Julian Ustiyanovych on LinkedIn: #gpt4 #llmpsychology #psychometrics
#GPT4, with 600 human raters, assessed 226 public figures' personalities. Results? #GPT4 nailed it with correlations from r=.76 to .87, outperforming models built for this task 🤯 The kicker? It wasn't even trained or given feedback!
read more: www.linkedin.com/posts/j16h_g...
26.09.2024 11:23
To conclude, LLM-based simulations of experiments could offer significant value in areas such as (1) intervention design, (2) minimizing harm to human participants, (3) pilot testing study materials, (4) predicting subgroup effects, (5) pre-testing product hypotheses, etc.
17.09.2024 06:36
A recent study titled "Predicting Results of Social Science Experiments Using Large Language Models" by Ashokkumar et al. (2024) found a strong alignment (r = .85) between simulated and observed effects across 70 pre-registered studies, 476 treatment effects, and over 100K participants.
17.09.2024 06:34
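The headline number in studies like this is a plain Pearson correlation between the effect sizes the LLM simulation predicts and the ones the experiments actually found. A minimal sketch with invented numbers (NOT the study's data):

```python
import statistics

# Hypothetical effect sizes; not data from Ashokkumar et al. (2024).
observed  = [0.12, 0.45, -0.08, 0.30, 0.51, 0.05, -0.15, 0.22]
simulated = [0.10, 0.40, -0.02, 0.35, 0.47, 0.11, -0.10, 0.18]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson_r(observed, simulated):.2f}")
```

A high r here means the simulation ranks and scales treatment effects the way the real experiments did, which is what the reported r = .85 captures across the 476 treatment effects.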
…the results show that the LLM crowd outperformed a simple no-information benchmark and is not statistically different from the human crowd.
17.09.2024 06:23
Another study by Schoenegger et al. (2024) on the "wisdom of the silicon crowd" used an LLM ensemble approach consisting of a crowd of 12 LLMs. They compared the aggregated LLM predictions on 31 binary questions to the predictions of 925 human forecasters from a three-month forecasting tournament...
17.09.2024 06:22
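Mechanically, a "silicon crowd" is ordinary forecast aggregation: one probability per model per question, combined (here by the median) and scored against outcomes. A sketch with invented probabilities (the 12-model ensemble on binary questions is the paper's setup; the numbers are made up):

```python
import statistics

# Invented probabilities from a 12-model "crowd" on 3 binary questions.
crowd_forecasts = [
    [0.7, 0.6, 0.8, 0.65, 0.72, 0.55, 0.9, 0.68, 0.61, 0.75, 0.7, 0.66],   # Q1
    [0.2, 0.35, 0.1, 0.25, 0.3, 0.15, 0.22, 0.4, 0.18, 0.28, 0.2, 0.33],   # Q2
    [0.5, 0.45, 0.6, 0.55, 0.5, 0.48, 0.52, 0.58, 0.47, 0.5, 0.53, 0.49],  # Q3
]
outcomes = [1, 0, 1]  # what actually happened

# Aggregate each question by taking the median of the 12 forecasts.
aggregated = [statistics.median(q) for q in crowd_forecasts]

# Brier score: mean squared error of the probabilities (lower is better).
brier = statistics.mean((p - o) ** 2 for p, o in zip(aggregated, outcomes))
print(f"aggregated forecasts: {aggregated}")
print(f"Brier score: {brier:.3f}")
```

Comparing this score against a human crowd's score on the same questions is essentially the tournament comparison the paper ran.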
For example, last year's study, "Can AI language models replace human participants?" by Dillion et al. (2023), focuses on moral psychology and suggests that GPT-3.5 (text-davinci-003) generates judgments about a variety of moral scenarios that strongly correlate with average human judgments.
17.09.2024 06:17
Similarly, other studies suggest that #SyntheticUsers' mean values tend to be highly similar to those of their human counterparts.
17.09.2024 06:15
Multiple strands of evidence suggest that #LLMs are quite effective at providing answers that closely reflect those collected from real humans, effectively simulating human answers, behaviours, and psychological traits.
17.09.2024 06:04
With #o1 out, we've got System 2 (slow thinking) alongside System 1 (fast thinking). #SyntheticUsers will now better mimic human behavior, showing more human-like cognitive patterns and moving beyond simple reactions to more thoughtful, context-aware actions.
17.09.2024 06:02
#SyntheticUsers, who are they?
According to Frontline BeSci (2024), a "synthetic user," or, to use the more widely known academic term, a "synthetic respondent," is an artificially created user profile powered by LLMs. It simulates the behavioral and psychological characteristics of a real human.
17.09.2024 06:00
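Mechanically, a synthetic respondent is little more than a persona rendered into a system prompt for a chat-style LLM. A minimal sketch of that idea (the Persona fields and prompt wording are my own illustration, not from Frontline BeSci):

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A synthetic respondent profile; fields are illustrative."""
    age: int
    occupation: str
    values: list[str]
    traits: dict[str, float]  # e.g. Big Five scores in [0, 1]

def to_system_prompt(p: Persona) -> str:
    """Render the persona as a system prompt for any chat-style LLM."""
    traits = ", ".join(f"{k}={v:.1f}" for k, v in p.traits.items())
    return (
        f"You are a {p.age}-year-old {p.occupation}. "
        f"You value {', '.join(p.values)}. "
        f"Personality (0-1 scale): {traits}. "
        "Answer survey questions in character, consistently with this profile."
    )

respondent = Persona(
    age=34,
    occupation="nurse",
    values=["family", "job security"],
    traits={"openness": 0.4, "conscientiousness": 0.8},
)
print(to_system_prompt(respondent))
```

In practice you would generate many such personas from census-like marginals and send each prompt, plus the survey question, to the model of your choice.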
Indeed!
11.11.2023 09:54
Anyone familiar with Ford 2000? Looks great, huh? :)
27.08.2023 19:44
Why does coffee in the US suck so badly? :) Or am I wrong? :)
27.08.2023 19:31
hello blue :)
22.08.2023 23:01