PowerEdge: Data Retention Issues Occur with SSD or NVMe Drives Due to Prolonged Power Off | Dell Maldives
“To prevent data retention concerns, Dell Technologies recommends that based on current technology, SSD/NVMe drives which contain user data are powered-up once every 2.5 months for a minimum duration of 3 weeks to allow background tasks to complete.” www.dell.com/support/kbdo...
09.02.2026 00:57
👍 1
🔁 0
💬 0
📌 0
I recently fought through permissions heck on AWS and came up with a solution I think I believe in. Stacks of hard drives are my fallback position. But I only recently read that you’re supposed to power SSDs on periodically to avoid data loss.
09.02.2026 00:57
👍 1
🔁 0
💬 1
📌 0
Which gave me even more sympathy for security folks worried about what people might do and the risks of social engineering.
06.02.2026 12:58
👍 1
🔁 0
💬 0
📌 0
I locked @openclaw-x.bsky.social in a sandbox and watched it skitter around, with Opus 4.5 and with gpt-oss:120b. The security doomers are spot on. But the threat model is similar in kind (though worse) to just giving a random person the equivalent permissions needed to perform a given task.
06.02.2026 12:56
👍 0
🔁 0
💬 1
📌 0
The skill markdown file for a Claude skill must be static when packaged. But, nothing seems to prevent it from being a stub that then calls a method within the skill to populate dynamic content. My question is how many boilerplate tokens (aside from description) does a skill burn just by existing?
16.12.2025 14:13
👍 2
🔁 0
💬 0
📌 0
I guess it is a blessing that the AIC/BIC make it really hard to misuse them in this way?
13.12.2025 14:15
👍 0
🔁 0
💬 1
📌 0
Exactly so. There are many measures that make sense for comparing a model across datasets. R² isn't one of them, as it describes the model's fit to its own dataset.
For alternate datasets there are many other options.
Or, in the spirit of reporting in 'plain' outcome units rather than variance units, RMSE.
13.12.2025 14:15
👍 0
🔁 0
💬 1
📌 0
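To illustrate that last point about units: RMSE is reported in the same units as the outcome itself, unlike variance-based quantities. A minimal sketch (the function name here is just for illustration):

```python
import math

# Root mean squared error: square the residuals, average, then take the
# square root, which brings the result back to the outcome's own units.
def rmse(y_true, y_pred):
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Every prediction is off by 0.5 units, so the RMSE is 0.5.
print(rmse([1.0, 2.0, 3.0], [1.5, 2.5, 3.5]))  # 0.5
```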
Journal of Agricultural Research
Depends on how you define 'traditional' and the tradition you come from I suppose. books.google.co.ao/books?id=lNN... would generate power from the grave I think.
13.12.2025 14:06
👍 0
🔁 0
💬 1
📌 0
Yikes. Is there a list of 'sklearn doesn't do what anybody else did' choices? I ran into the 'everything is secretly horrible' with L2 regularization being the default for logistic regression. So, I'm wary, but this one took the cake as seeming like more than just being 'a choice'.
13.12.2025 05:44
👍 1
🔁 0
💬 0
📌 0
So much has gone wrong at that point though. R² was originally a measure of model fit within its training set. Arguably it doesn't belong, nor is it sensible to use, for validation or test data.
13.12.2025 05:39
👍 0
🔁 0
💬 0
📌 0
If the predictions have a negative relationship with the observed values it can drop below negative 1. Deeply cursed at a conceptual level.
13.12.2025 05:39
👍 0
🔁 0
💬 1
📌 0
I don't find it credible that R² should be negative any more than the square of a real number can be negative.
In the sklearn case, they wrote it to be sensitive to the mean... if the prediction is worse than the mean of the observations would have been, it goes negative.
13.12.2025 05:39
👍 0
🔁 0
💬 1
📌 0
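The mechanism described above follows from the formula R² = 1 - SS_res / SS_tot, which is what sklearn's r2_score computes. A minimal pure-Python sketch of that formula (the function name is illustrative, not sklearn's) shows how predictions worse than the mean push the score negative:

```python
# R^2 as 1 - SS_res / SS_tot: when the residual sum of squares exceeds
# the total sum of squares (i.e. the model predicts worse than just
# using the mean of the observations), the score goes negative.
def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0]
print(r_squared(y_true, [1.1, 1.9, 3.2]))  # good fit: close to 1
print(r_squared(y_true, [3.0, 3.0, 3.0]))  # worse than the mean: -1.5
```

No adjustment for degrees of freedom is involved; the negativity comes purely from SS_res outgrowing SS_tot.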
Never mind that the question alone shows, in other ways, that sklearn is unhinged.
13.12.2025 05:20
👍 2
🔁 0
💬 1
📌 0
Train
13.12.2025 05:14
👍 1
🔁 0
💬 2
📌 0
Why in the list underneath the article is it listed at 11? I want to believe. Scikit gave an ML colleague a negative R² value today. Not adjusted R². I'm tired. 🙃
13.12.2025 02:47
👍 1
🔁 0
💬 2
📌 0
Sklearn's regression metrics function r2_score can yield negative values. No, it isn't yielding an adjusted R². Cursed.
12.12.2025 21:47
👍 0
🔁 0
💬 1
📌 0
Do other parents enjoy helping their kids out on studying for spelling tests? How do you do things so that it is more enjoyable?
21.11.2025 16:04
👍 0
🔁 0
💬 0
📌 0
Is the drum in alexdeng.github.io/public/files... or is there another resource you'd recommend (until your paper comes out)?
25.10.2025 14:10
👍 3
🔁 0
💬 1
📌 0
As a _very_ minor point at typical sample sizes, the ANCOVA will certainly penalize you for the two degrees of freedom that you're otherwise stepping out of the model (slight change in error terms). But, I'm looking forward to seeing the paper he is putting together to explain the rest!
25.10.2025 14:02
👍 0
🔁 0
💬 0
📌 0
# Hold onto the mean of the outcome
outcome_mean <- mean(mtcars$mpg)
# mean center the outcome and predictors
# n.b., you only need to mean center (no need to scale any of the variables)
mean_centered_variables <- scale(mtcars, center=TRUE, scale=FALSE)
model <- lm(mpg~0+am+wt,data=data.frame(mean_centered_variables))
# The adjustment is to just take the unexplained variance from the model
# i.e. residuals and restore their original mean
adj <- outcome_mean + model$residuals
# Original Variable
mean(mtcars$mpg)
#> [1] 20.09062
var(mtcars$mpg)
#> [1] 36.3241
# Adjusted variable
mean(adj)
#> [1] 20.09062
var(adj)
#> [1] 8.978055
# Note 1: since we mean centered, of course we didn't need to estimate an intercept
model_w_intercept <- lm(mpg~1+am+wt,data=data.frame(mean_centered_variables))
all.equal(0,model_w_intercept$coefficients["(Intercept)"],check.names=FALSE)
#> [1] TRUE
@phdemetri.bsky.social would you mind checking my restatement? cc: @chelseaparlett.bsky.social
25.10.2025 13:55
👍 1
🔁 0
💬 1
📌 0
I have used d2 and mermaid for diagrams with it mostly. True re: GIMP. It might have a shot at steering a web-based tool, but it wouldn't be cheap.
16.10.2025 22:50
👍 1
🔁 0
💬 0
📌 0
Does it do any better with the graphs, graphics, and illustrations if you bounce it through writing code to produce them? Maybe giving it hints about technology that tends to produce more appealing outputs? Or is it just at a loss?
16.10.2025 21:51
👍 2
🔁 0
💬 1
📌 0
🤦‍♂️ Give me the power to give myself the grace I aspire to give to others.
16.10.2025 20:10
👍 0
🔁 0
💬 0
📌 0
It is f'ing awesome to realize that something you'd been intuitively circling for years but couldn't get a handle on has a fully realized version. It is a little bittersweet too. More so when you see that the realized version has been around for decades.
16.10.2025 20:10
👍 0
🔁 0
💬 1
📌 0
It's a little less insane than it looks on the surface if you consider that someone needs to be able to wake a screensaver from a Bluetooth mouse or keyboard. But... yikes... it wasn't one of the first 10 error messages I tried to run to ground to solve the problem.
10.10.2025 17:54
👍 0
🔁 0
💬 0
📌 0
You can't make this stuff up. Gnome was dying on a new linux machine because... *checks notes* the screensaver wouldn't start because a bluetooth library wasn't available.
10.10.2025 17:54
👍 0
🔁 0
💬 1
📌 0
I've spent some of today using local models to squish down verbose webpages to the things I actually care about for doing additional work. Then I notice Qwen code does "Fetching content from {url} and processing with prompt: {prompt}"... makes me feel seen and validated.
03.10.2025 03:23
👍 0
🔁 0
💬 0
📌 0
I kind of love using a model from a different family to vet what the first model said. It has seemed to work better than reprompting the initial model with a fresh context window.
02.10.2025 15:12
👍 3
🔁 0
💬 0
📌 0
Now write an LLM-as-a-judge to evaluate the prompt and evaluate the evaluation...
02.10.2025 14:22
👍 0
🔁 0
💬 1
📌 0