#selfsupervised
Posts tagged #selfsupervised on Bluesky
Self‑Supervised Deviation Learning Enhances Spatio‑Temporal Forecasting

ST‑SSDL, a self‑supervised model that anchors inputs to historical averages, beats baselines on spatio‑temporal datasets; the paper (arXiv:2510.04908) was accepted at NeurIPS 2025. getnews.me/self-supervised-deviatio... #selfsupervised #forecasting

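As a rough illustration of the anchoring idea above: instead of feeding raw values to the forecaster, each input is expressed as its deviation from a historical average. A minimal numpy sketch; the function name and the exact anchoring mechanism used by ST‑SSDL are assumptions here.

```python
import numpy as np

def deviation_features(x, historical_mean):
    """Anchor current readings to their historical averages: the model
    then reasons about how far each sensor deviates from what is
    typical, rather than about raw magnitudes."""
    return x - historical_mean

# Toy example: traffic readings for four sensors vs. long-run averages.
x = np.array([12.0, 30.0, 7.0, 55.0])
hist = np.array([10.0, 28.0, 9.0, 50.0])
dev = deviation_features(x, hist)
```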
Unsupervised Transformer Pre‑Training for Images: DINOv2 Survey

DINOv2 beats weakly supervised models like OpenCLIP on vision benchmarks using multi‑crop augmentation and a mean‑teacher self‑distillation setup (Oct 2025). Read more: getnews.me/unsupervised-transformer... #dinov2 #selfsupervised

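The mean‑teacher self‑distillation setup mentioned above can be sketched in a few lines: the teacher network is never trained by backprop; its weights just track an exponential moving average (EMA) of the student's. A generic illustration with toy weights, not DINOv2's actual training code.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.996):
    """Mean-teacher step: teacher weights track an exponential moving
    average of the student weights (no gradients flow here)."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

# Toy example with a single weight matrix shared by both networks.
rng = np.random.default_rng(0)
student = {"w": rng.standard_normal((4, 4))}
teacher = {"w": student["w"].copy()}

for _ in range(10):
    # Stand-in for a gradient step on the student.
    student["w"] = student["w"] - 0.01 * rng.standard_normal((4, 4))
    teacher = ema_update(teacher, student)
```

The high momentum means the teacher changes slowly, which keeps its targets stable enough for the student to distill from.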
Equivariant Splitting for Self‑Supervised Learning on Incomplete Data

Equivariant split loss enables self‑supervised models to match supervised performance on incomplete observations, with state‑of‑the‑art results on image inpainting and MRI. getnews.me/equivariant-splitting-fo... #selfsupervised #equivariantsplit

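For intuition, the core "splitting" idea in this line of work: partition the observed entries into two disjoint subsets, reconstruct from one subset, and evaluate the loss only on the held‑out subset, so no ground truth is ever needed. A minimal sketch with an identity stand‑in for the network; the paper's equivariance component is omitted.

```python
import numpy as np

def splitting_loss(reconstruct, y, obs_mask, rng):
    """Self-supervised splitting loss for incomplete data: feed the
    network one split of the observed entries, score it on the other."""
    keep = rng.random(obs_mask.shape) < 0.5
    in_mask = obs_mask & keep        # entries fed to the network
    tgt_mask = obs_mask & ~keep      # entries seen only by the loss
    x_hat = reconstruct(y * in_mask, in_mask)
    return np.mean((x_hat[tgt_mask] - y[tgt_mask]) ** 2)

# Toy run: identity "reconstruction" on a ~60%-observed image.
rng = np.random.default_rng(1)
y = rng.standard_normal((8, 8))
obs_mask = rng.random((8, 8)) < 0.6
loss = splitting_loss(lambda z, m: z, y, obs_mask, rng)
```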
Co-rewarding: Self‑Supervised RL Improves Reasoning in LLMs

Researchers introduced Co‑rewarding, a self‑supervised RL method that boosts LLM math reasoning, delivering an average +3.31% gain and a 94.01% Pass@1 score on GSM8K with Qwen‑3‑8B‑Base. getnews.me/co-rewarding-self-superv... #corewarding #selfsupervised #llm

Non‑Contrastive Self‑Supervised Learning Boosts Network Intrusion Detection

Five non‑contrastive methods across three encoders and six augmentations (90 experiments) on UNSW‑NB15 and 5G‑NIDD datasets beat DeepSVDD and autoencoder baselines. getnews.me/non-contrastive-self-sup... #selfsupervised #intrusiondetection

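The post doesn't name the five methods, but a representative non‑contrastive objective is the VICReg‑style variance/invariance/covariance loss, which needs no negative pairs. A hedged numpy sketch, standing in for whatever losses the study actually used:

```python
import numpy as np

def vicreg_loss(z1, z2, inv_w=25.0, var_w=25.0, cov_w=1.0):
    """VICReg-style non-contrastive loss: an invariance term pulls the
    two views together, a hinge on per-dimension std prevents collapse,
    and a covariance penalty decorrelates embedding dimensions."""
    n, d = z1.shape
    inv = np.mean((z1 - z2) ** 2)

    def var_term(z):
        std = np.sqrt(z.var(axis=0) + 1e-4)
        return np.mean(np.maximum(0.0, 1.0 - std))

    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d

    return (inv_w * inv
            + var_w * (var_term(z1) + var_term(z2))
            + cov_w * (cov_term(z1) + cov_term(z2)))

rng = np.random.default_rng(0)
z = rng.standard_normal((32, 8))
loss = vicreg_loss(z, z + 0.1 * rng.standard_normal((32, 8)))
```

The variance hinge is what makes this usable for anomaly-style tasks like intrusion detection: without it, the encoder could map everything to one point.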
Self‑Supervised Learning Linked to Mutual Information Theory

A study reframes self‑supervised representation learning as mutual information maximization, proposing two training paradigms, SDMI and JMI, in a paper submitted in October 2025. Read more: getnews.me/self-supervised-learning... #selfsupervised #ml

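The MI framing connects to a standard result (SDMI/JMI specifics are in the paper): the InfoNCE objective lower‑bounds the mutual information between two views, with bound log(N) minus the loss. An illustrative numpy sketch of that bound, not the paper's method:

```python
import numpy as np

def infonce(z_a, z_b, temperature=0.1):
    """InfoNCE loss over a batch of paired views; log(N) minus this
    loss is the classic lower bound on the mutual information
    between the views, linking contrastive SSL to MI theory."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / temperature                 # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # matched pairs on the diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((16, 8))
loss = infonce(z, z + 0.05 * rng.standard_normal((16, 8)))
mi_lower_bound = np.log(16) - loss
```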
Structure-Aware Self-Supervised Learning Boosts Text-Attributed Graphs

SSTAG combines LLM‑to‑MLP and GNN‑to‑MLP distillation and uses an in‑memory repository of graph anchors for cross‑domain transfer. Read more: getnews.me/structure-aware-self-sup... #graphlearning #selfsupervised #nlp

Equivariant splitting advances learning from incomplete data

New self‑supervised splitting loss with equivariance achieves state‑of‑the‑art results on image inpainting, accelerated MRI and compressive sensing, without ground‑truth data. Read more: getnews.me/equivariant-splitting-ad... #selfsupervised #imaging

SCNet Reduces Spectral Bias in Self‑Supervised Image Denoising

SCNet adds Lipschitz‑constrained kernels and a spectral‑separation module to self‑supervised denoising, and tests on synthetic and real‑world images show it outperforms existing methods. getnews.me/scnet-reduces-spectral-b... #selfsupervised #denoising

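For readers unfamiliar with Lipschitz constraints: a common way to bound a linear map's Lipschitz constant is to divide its weights by their largest singular value (spectral norm), estimated cheaply with power iteration. A dense‑matrix sketch of that general idea; SCNet's actual constrained convolution kernels are described in the paper.

```python
import numpy as np

def lipschitz_normalize(w, n_iters=50):
    """Constrain a linear map to be (at most) 1-Lipschitz in the
    2-norm by dividing its weight matrix by its largest singular
    value, estimated via power iteration."""
    u = np.ones(w.shape[1]) / np.sqrt(w.shape[1])
    for _ in range(n_iters):
        v = w @ u
        v = v / np.linalg.norm(v)
        u = w.T @ v
        u = u / np.linalg.norm(u)
    sigma = v @ w @ u          # largest-singular-value estimate
    return w / sigma

rng = np.random.default_rng(0)
w = 3.0 * rng.standard_normal((6, 6))
w_hat = lipschitz_normalize(w)
```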
Efficient Self‑Supervised Adaptation Boosts Medical Image Analysis

ESSA reduces GPU memory consumption by up to 40.1% and raises training throughput by 25.2% while keeping inference speed unchanged. The method will be presented at ICCV CVAMD 2025. getnews.me/efficient-self-supervise... #essa #selfsupervised

REALIGN boosts self‑supervised learning from instructional videos

REALIGN improves self‑supervised learning from instructional videos, delivering up to 18.9% higher average F1‑score and over 30% gain in temporal IoU on EgoProceL and CrossTask. Read more: getnews.me/realign-boosts-self-supe... #realign #selfsupervised

CCNeXt advances self‑supervised stereo depth estimation

CCNeXt delivers stereo depth estimation 10.18× faster than the prior top model while matching KITTI Eigen split metrics; the code is open on GitHub. Read more: getnews.me/ccnext-advances-self-sup... #ccnext #stereodepth #selfsupervised

Self‑Supervised Multi‑View Crowd Counting Achieves State‑of‑the‑Art Accuracy

SSLCounter achieves top accuracy in multi‑view crowd counting while using only 70% of the usual training data. The paper was submitted on 26 Sep 2025. Read more: getnews.me/self-supervised-multi-vi... #selfsupervised #crowdcounting

Contrastive Info Learning Improves Representations Without Augmentation

cMIM, a contrastive Mutual Information Machine, removes the need for positive‑pair augmentations and lowers batch‑size sensitivity. The preprint was submitted on 25 Sep 2025. Read more: getnews.me/contrastive-info-learnin... #cMIM #selfsupervised

Adversarial Robustness of Discriminative Self‑Supervised Vision Models

Seven SSL models were tested on ImageNet; they outperformed a supervised baseline under adversarial attacks in linear evaluation, but fine‑tuning narrows the gap. Read more: getnews.me/adversarial-robustness-o... #selfsupervised #adversarial #visionmodels

SpellerSSL Boosts EEG Speller Performance with Self‑Supervised Learning

SpellerSSL lifts EEG‑speller accuracy to 94% with just seven stimulus repetitions, cutting calibration trials by about 60%. Read more: getnews.me/spellerssl-boosts-eeg-sp... #eeg #bcis #selfsupervised

Self‑Supervised Symbol‑Temporal Learning Boosts Time‑Series AI

A self‑supervised method merging symbol‑temporal consistency with contrastive learning boosts robustness; submitted 24 Sep 2025, it outperforms existing approaches on healthcare benchmarks. Read more: getnews.me/self-supervised-symbol-t... #selfsupervised #healthai

Pose‑Free 3D Gaussian Splatting Works from Sparse Views

SPFSplatV2 generates high‑quality 3D Gaussian splats from a handful of unposed images, removing the need for camera poses and achieving state‑of‑the‑art novel view synthesis results. Read more: getnews.me/pose-free-3d-gaussian-sp... #gaussian #selfsupervised

New Self‑Supervised Method Improves Brain Disorder Diagnosis

A self‑supervised model builds fMRI connectivity maps, pre‑trained on large unlabeled data and fine‑tuned with few labels, boosting diagnostic accuracy for disorders like Alzheimer’s. getnews.me/new-self-supervised-meth... #brainconnectivity #selfsupervised

Pre‑Training Boosts 3D Medical Object Detection, Study Finds

A systematic study found self‑supervised reconstruction pre‑training improves 3D medical object detection accuracy over supervised pre‑training; contrastive methods gave no clear benefit. getnews.me/pre-training-boosts-3d-m... #3dmedimaging #selfsupervised

Self‑Supervised Depth Estimation Improves Object Boundary Sharpness

A new self‑supervised monocular depth method models per‑pixel depth as a mixture distribution, improving object‑boundary sharpness by up to 35% on KITTI and VKITTIv2. getnews.me/self-supervised-depth-es... #monoculardepth #selfsupervised

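The "mixture distribution" idea is what sharpens boundaries: a single Gaussian per pixel gets pulled toward the average of foreground and background depth, while a mixture can keep both modes sharp. A toy numpy illustration; the per‑pixel parameterization here is an assumption, not the paper's.

```python
import numpy as np

def mixture_depth_nll(depth, weights, means, sigmas):
    """Negative log-likelihood of observed depth under a per-pixel
    Gaussian mixture. On object boundaries the mixture can keep
    separate foreground and background modes instead of blurring
    toward an average depth in between."""
    d = depth[..., None]  # (H, W) -> (H, W, 1), broadcasts over K components
    comp = np.exp(-0.5 * ((d - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return -np.mean(np.log(np.sum(weights * comp, axis=-1) + 1e-12))

# Toy boundary pixels: true depth sits on the foreground mode (2 m),
# with the background mode at 10 m.
H, W, K = 2, 2, 2
depth = np.full((H, W), 2.0)
weights = np.full((H, W, K), 0.5)
means = np.stack([np.full((H, W), 2.0), np.full((H, W), 10.0)], axis=-1)
sigmas = np.full((H, W, K), 0.5)
nll_bimodal = mixture_depth_nll(depth, weights, means, sigmas)

# A single Gaussian at the "averaged" depth of 6 m fits the same pixels far worse.
nll_unimodal = mixture_depth_nll(depth, np.ones((H, W, 1)),
                                 np.full((H, W, 1), 6.0),
                                 np.full((H, W, 1), 0.5))
```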
CrossI2P Advances Self‑Supervised Image‑to‑Point Cloud Registration

Researchers unveiled CrossI2P, a self‑supervised image‑to‑point‑cloud registration system that boosts accuracy by 23.7% on KITTI and 37.9% on nuScenes. getnews.me/crossi2p-advances-self-s... #crossi2p #selfsupervised

Foveated Vision Enhances Self‑Supervised Object Learning in New Study

Simulated foveated vision improves self‑supervised object learning on egocentric video, the study reports. Published Sep 2025, accepted at IEEE ICDL 2025. Read more: getnews.me/foveated-vision-enhances... #foveatedvision #selfsupervised #icdl

Self‑Supervised Vision Transformers: Raw Feature Performance

Contrastive‑pretrained Vision Transformers achieve top classification accuracy with query tokens, while MIM‑pretrained models excel in segmentation using features and a classifier. getnews.me/self-supervised-vision-t... #selfsupervised #visiontransformers

Vision Transformers Demonstrate Human‑Like Mental Rotation Abilities

A study submitted on 18 September 2025 shows self‑supervised Vision Transformers outperform supervised models in mental‑rotation tests. Read more: getnews.me/vision-transformers-demo... #visiontransformers #selfsupervised #mentalrotation

DeCoP Boosts Time-Series AI via Dependency Pre-Training

DeCoP, a new self‑supervised time‑series framework, improves mean squared error by 3% on ETTh1 versus PatchTST and cuts floating‑point ops to 37% of the original. Read more: getnews.me/decop-boosts-time-series... #timeseries #selfsupervised #ai

Weakly & Self‑Supervised Motion Prediction for Autonomous Driving

A new weakly and self‑supervised LiDAR motion prediction method needs only 0.1% (or 0.01% with ground masks) of annotations and matches fully supervised accuracy. getnews.me/weakly-self-supervised-m... #motionprediction #selfsupervised

SiamSA-PPM: Data Augmentation Boosts Predictive Process Monitoring

SiamSA‑PPM combines Siamese learning with three trace augmentation methods, achieving performance that matches or exceeds state‑of‑the‑art on public event logs. getnews.me/siamsa-ppm-data-augmenta... #predictivemonitoring #selfsupervised

LayerLock Uses Progressive Freezing to Avoid Representation Collapse

LayerLock uses progressive freezing for Vision Transformers, enabling models up to 4 billion parameters to train efficiently and avoid collapse. Submitted to ICCV 2025 on 12 Sept 2025. getnews.me/layerlock-uses-progressi... #layerlock #selfsupervised

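The progressive‑freezing idea is simple to sketch: as training advances, a growing prefix of layers stops receiving gradient updates, so early layers settle first while later layers keep training. The linear schedule below is an assumption for illustration; LayerLock's actual schedule is in the paper.

```python
def frozen_layer_count(step, total_steps, num_layers):
    """Progressive freezing schedule: returns how many leading layers
    are frozen (excluded from gradient updates) at a given step,
    growing linearly from none to all over the course of training."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    return int(frac * num_layers)

# Illustrative schedule for a 12-layer ViT over 1000 steps.
schedule = [frozen_layer_count(s, 1000, 12) for s in (0, 250, 500, 1000)]
```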
MetaverseTrendsHub News!
Revolutionizing vision! Meta's DINOv3 delivers high-precision image analysis. Explore this cutting-edge self-supervised model. #DINOv3 #ComputerVision #SelfSupervised

Click here↓↓↓
metaversetrendshub.com/2025/08/16/d...
