You had it work directly with the method files? I suppose it should be able to but I've been paranoid about accidental bricking.
Not to be an Anthropic fanboy but Claude is in the substantial lead with respect to recognizing nonsense github.com/petergpt/bul...
Oops, I misnamed that part: actually the outer mz's are trimmed (mz's outside the cutoff percentiles are excluded), not winsorized (mz's outside the percentiles are replaced with the cutoff value)
Basically takes all precursor rt-mz's, deduplicates and winsorizes, and then breaks them down into evenly sized quantiles based on the number of desired windows in mz (variable) and rt (segments). The boundaries are optimized for z=2&3. Staggering is on by default, which helps with MS2 deconvolution.
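Rough sketch of the idea in R (not the webapp's actual code; the column names, cutoff percentiles, and window count here are placeholder assumptions):

```r
## Hypothetical sketch: variable DIA windows via precursor m/z quantiles
prec <- read.csv("precursors.csv")        # assumed columns: rt, mz
mz   <- unique(prec$mz)                   # deduplicate

## trim outer mz's beyond cutoff percentiles (per the correction above)
cut <- quantile(mz, c(0.001, 0.999))
mz  <- mz[mz >= cut[1] & mz <= cut[2]]

## quantile boundaries -> roughly equal precursor counts per window
n_win  <- 40
bounds <- quantile(mz, probs = seq(0, 1, length.out = n_win + 1))

## staggered second cycle: boundaries shifted by ~half a window
stagger <- (bounds[-1] + bounds[-length(bounds)]) / 2
```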
Want better DIA #proteomics coverage at the same scan rate with almost zero effort? Get the variable and/or RT-segmented window mz's by uploading your precursor rt-mz table to this free & easy webapp! unitsaq.shinyapps.io/diavariablew...
Idea to webapp in 2 days with #ClaudeCode!
Haha true re: vendor poster but seems others have reproduced decent results at least down to ~5 min www.nature.com/articles/s41... 2 min runs seem like pushing the boundary unnecessarily unless you had to run thousands of samples...
4 min grad (300 SPD) doesn't look too bad either www.biorxiv.org/content/10.1...
Thermo Poster: Not 2 min but 8 min showed no apparent decline in quant accuracy vs 24 min.
If 15 min is about 6 sec peaks, a 2 min grad is ~1 sec wide (6 s × 2/15 ≈ 0.8 s) lol--can't imagine that's good, but maybe the power of averaging multiple peptides per protein partially rescues the pep-level degradation
Yes, but it's agentic engineering now
Yes, though my feeling is that PDF vector graphics are processed as code, so it's probably better to render as PNG for image recognition.
The unlock with CC is the iterative, self-regulating feedback loop. It does much better when it can run and check its own performance and self-adjust rather than flying blind as with pure LLM code generation.
CC runs R itself from CLI
terminal
cd to project folder
claude
/plan mode
Please write an R script to extract and plot MS1 EICs for the peptides in the .csv from .raw files. Use /path/to/R and msconvert. Please test in small chunks and review the results/plots before progressing and scaling
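For reference, a minimal sketch of the kind of script it converges on, assuming mzR (Bioconductor) and msconvert on PATH; the file names and csv columns here are made up:

```r
## Hypothetical EIC extraction sketch (illustrative, not CC's actual output)
library(mzR)

system("msconvert sample.raw --mzML -o .")   # .raw -> mzML via ProteoWizard
ms  <- openMSfile("sample.mzML")
hdr <- header(ms)
ms1 <- which(hdr$msLevel == 1)               # MS1 scan indices

targets <- read.csv("peptides.csv")          # assumed columns: peptide, mz
ppm <- 10

for (i in seq_len(nrow(targets))) {
  mz  <- targets$mz[i]
  tol <- mz * ppm / 1e6
  ## sum intensity within +/- tol of the target m/z in each MS1 scan
  eic <- sapply(ms1, function(s) {
    p <- peaks(ms, s)                        # matrix: m/z, intensity
    sum(p[p[, 1] >= mz - tol & p[, 1] <= mz + tol, 2])
  })
  png(sprintf("eic_%s.png", targets$peptide[i]))
  plot(hdr$retentionTime[ms1] / 60, eic, type = "l",
       xlab = "RT (min)", ylab = "Intensity", main = targets$peptide[i])
  dev.off()
}
```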
Still misses, so it needs careful guidance, but less so if the scope is narrow and the task objective is well-defined.
Make sure to /plan well (agentic analogue to prompting), have it output relevant metrics and PNG visualizations for feedback, and keep track of % context fill. Eats tokens like a firehose so a subscription is probably necessary to minimize costs.
Imo the workflow for CC is a bit different, eg initialize it in a project folder with starting inputs and it writes, runs, and troubleshoots the analysis itself iteratively. Once it's done to your satisfaction you can take the handoff code. More like an assistant developer. No copy-pasting needed.
A mini success story: cropping RT, eg 1-60 vs 0-60 min, for DIA mzML breaks DIA-NN. It was a re-indexing issue, which is simple but would have taken me several manual iterations. With CC in its own local write-run-check iteration loop, it was solved overnight with a one-shot prompt. Needs tokens though.
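No idea what CC's exact fix looked like, but the gist of re-indexing is just renumbering scans from 1 and remapping precursor references; a minimal sketch assuming mzR, with illustrative file names:

```r
## Hypothetical re-indexing sketch (not CC's actual code)
library(mzR)

ms  <- openMSfile("cropped.mzML")   # RT-cropped file that breaks DIA-NN
hdr <- header(ms)
pks <- peaks(ms)                    # list of m/z-intensity matrices

## map old acquisition numbers to a fresh 1..N sequence
old2new <- setNames(seq_len(nrow(hdr)), hdr$acquisitionNum)
prec    <- old2new[as.character(hdr$precursorScanNum)]
hdr$precursorScanNum <- ifelse(is.na(prec), 0, prec)
hdr$seqNum <- hdr$acquisitionNum <- seq_len(nrow(hdr))

copyWriteMSData(pks, "cropped_reindexed.mzML", "cropped.mzML", hdr)
```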
Still learning/playing but seems like at least with CC there's a step increase in "intelligence". Importantly having CC write and run the code locally, review the outputs and self-correct removes a ton of friction. Still needs careful high level planning and design direction though.
Have you tested Antigravity vs Claude Code? From my use in the last week, Claude Code with Opus 4.5 is way more capable than in the past--able to one-shot simple data processing ideas consistently, but complex ideas are hit or miss, especially when the context fills up.
I think just adjusting the specificity of the language to match the scope of what they are describing would suffice, eg iTDP refers to a range of approaches, whereas 2DGE-TDP or analogous refers to specific approaches. If this was the original intention then the wiki language does not convey it.
Passerby's perspective: seems iTDP is used specifically to refer to 2DGE but semantically could refer to all combinations of techniques eg 1/2/n-D GE/LC + native/denatured/digested MS. Appreciate increased article depth, but visceral reaction is likely bc of specific use of a nonspecific new term.
The obvious answer here is a high frequency scan-polarity-aligned mechanical source repositioning device /s
Isn't that pretty much AmAc + AA? How well does it do for the cholesterol-like ionizers? I use the same source position since I didn't see anything dramatic when I was fiddling.
Has anyone tried 10 mM AmAc:AmFo (~7:3 molar) as the additive for #LC-MS #lipidomics?
Based on Cajka et al, seems like it would be a good compromise for fatty acids vs hydroxysterols to enable consistent RTs between polarities and fast polarity switching doi.org/10.3390/ijms...
Awesome #massspec teaching video for ion trapping--tangible/visible trapping in action: youtu.be link
Digital torque wrench with a slotted Allen wrench is the way to go. Reproducible tightening across users and different tightness levels for different connections. Leak check by holding pressure with a plugged outlet (closed system), eg 700 bar
I hope they miniaturize this into a benchtop version like the Exploris--fast polarity switching would make it a killer.
I tried reagent then mass spec grade DMSO added to A and B solvents. Noticed S-containing degradation products so tried frozen aliquots. Still found faster than expected decrease in sensitivity across runs. The only thing that brought back performance was Quad cleaning 😢 I do micro flow though...
Was never able to get DMSO working long term--dirtying the MS internals wasn't worth the increased pep sensitivity. How do others get it to work??
Generally agree about mRNA != Protein but the liver example is a bit unfair as it's the primary tissue supplying plasma, i.e. if the protein representation was liver + plasma this figure would have much higher congruence.
Run time determines instrument cost per sample?
Agreed at the individual peak level. For the sake of argument, what about n=100 vs 100 compared to 4x slower 50 SPD at n=25 vs 25? The latter is barely enough stat power to get past human variation, whereas the former averages across peptides per protein to make up for fewer points across the peak.
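Quick power check with assumed numbers (two-sample t-test in base R; the effect size and SD are hypothetical, not from any dataset):

```r
## Assumed: log2 fold change of 0.5, biological SD of 0.6
power.t.test(n = 25,  delta = 0.5, sd = 0.6, sig.level = 0.05)  # power ~0.8, borderline
power.t.test(n = 100, delta = 0.5, sd = 0.6, sig.level = 0.05)  # power ~1
```

Under these assumptions n=25/group sits right at the conventional 0.8 threshold, while n=100 has power to spare to absorb peptide-level noise.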