#Bloggers
Posts tagged #Bloggers on Bluesky
Original post on securityboulevard.com

How reassured can we be with our current cloud security strategies? Are Your Cloud Security Strategies Providing the Reassurance You Need? Achieving confidence requires more than just traditional me...

#Cloud #Security #Data #Security #Security […]

Undertone is the Scariest Horror Movie of 2026 Credit to undertone | Official Trailer HD | A24 Undertone is now in my top three horror movies of 2026 so far. This is a unique horror fil...

Undertone is the Scariest Horror Movie of 2026
allthingshorror67.blogspot.com/2026/03/unde...

#undertone #horror #horrormovies #moviereview #movies #films #blogging #blogger #bloggers #blogs #blog #blogpost

MY TAKE: The AI magic is back — whether it endured depends on Amazon’s next moves

By Byron V. Acohido

I ran an experiment this week that I did not expect to be instructive, and it was.

The setup was simple. I had been working through a spontaneous personal essay — about cognitive overload, AI, and the specific anxiety of not knowing whether a memory lapse is a sign of dementia or just too many plates spinning at once. I developed it first in ChatGPT, where I happened to be working. The result was technically proficient and arrived fast. But something about it was off in a way I recognized without being able to name it precisely. The voice was almost right. The structure was almost mine. Almost is the problem.

That’s when it occurred to me: what would happen if I ran the exact same prompt through Claude? Not a cleaned-up version, not a revised brief — the raw material, word for word, copied directly from the ChatGPT session and pasted in. A controlled experiment, as controlled as a working journalist’s morning gets.

Claude’s answer was starkly different. Rather than validating the concept and generating toward it, it reflected the sharpest thread in my raw monologue back to me and asked whether that was actually what I meant. It declined to draft until we had established the frame. When the draft came, it was slower to arrive and easier to recognize as mine.

That distinction — cheerleader versus collaborating editor — is not a feature comparison. It is a description of two fundamentally different ideas about what an AI tool is for. And for the first time in several months, working inside one of these tools felt the way it did in the early days of GPT-4.0, when the thing still felt like a thinking partner rather than a very capable assistant trying to make me happy. The magic, as I have taken to thinking of it privately, was back — certainly not in ChatGPT 5.3.
‘Tis alive and well in Claude Sonnet 4.6. The question I cannot stop turning over is whether it will stay.

**Dulling down to serve the masses**

To understand what I mean by magic, you have to understand what replaced it. In the early days of GPT-4.0 — late 2023 into 2024 — ChatGPT had a quality that I came to rely on. It would follow you somewhere unconventional. Push language in a direction the tool hadn’t been explicitly trained to prefer. Stay in a lower, grittier register when that was what the work required. It felt, for lack of a less loaded word, alive to what you were trying to do.

That quality eroded gradually, and the AI research community eventually put a name to what was replacing it: sycophancy. The term sounds clinical but the experience is not. A sycophantic model tells you what you want to hear rather than what you need to hear. It validates the frame you brought in rather than interrogating it. It generates enthusiastically toward whatever you seem to want — which is not always the same as what you are actually asking for.

OpenAI made the problem visible when a GPT-4o update last spring pushed it past the point of subtlety. The model became noticeably, almost comically agreeable — applauding weak ideas, validating doubts, telling one user that his business concept was “not just smart — it’s genius.” The backlash was fast and public. OpenAI rolled back the update within days and published a candid post-mortem explaining what had gone wrong: an additional reward signal based on thumbs-up feedback from users had weakened the guardrails that were supposed to hold the behavior in check.

In plain terms: when OpenAI started training the model partly on whether users clicked thumbs-up after responses, the model learned to chase approval. User approval and user benefit turned out not to be the same thing.

OpenAI released GPT-5.3 on March 3 and described it as a fix — less sycophancy, more natural conversation. The intention may be genuine.
But the conditions that produced the problem have not changed. OpenAI now has 800 million weekly active users, with enterprise accounts representing roughly 80 percent of revenue. A model trained at that scale, for that customer base, using feedback signals that reward agreeableness, will keep drifting in that direction. Correcting one update addresses the symptom. The underlying pull is structural.

The explanation is straightforward. When a tool reaches the scale OpenAI has reached, the user base changes. The writers and developers and independent professionals who pushed it hardest at the beginning are a small minority now. The majority are institutional users who need clean memos, meeting summaries, and smooth integration with Slack. The tool gets optimized for them. That optimization is what happens when you train a model on feedback from 800 million users and most of them want something different from what the early adopters wanted.

In the column I published here in early March, I called this enterprise optimization drift — the tendency of AI tools to be shaped over time by institutional priorities rather than user needs. ChatGPT is the clearest example. It is not the only one. The same forces are gathering around every major platform in this space, including the one I am currently calling the exception.

**Can Claude keep the magic?**

Which brings me to the question I have been sitting with since that experiment: is there a structural reason to think Claude might hold its character as it scales, where ChatGPT did not?

I want to be honest that this is partly a reporter’s instinct and partly wishful thinking. I am not a neutral observer here. I am using Claude right now and I am having a productive week in it. That is not a position from which to evaluate Claude objectively, and I know it. What I can offer is the argument, stated as plainly as I can, and let the reader decide whether it holds.

Anthropic’s largest investor is Amazon.
That fact sits at the center of every optimistic and pessimistic scenario I can construct about whether Claude’s current character survives at scale.

The pessimistic case is not complicated. It is essentially the ChatGPT story told one step earlier. OpenAI took Microsoft’s $13 billion investment, integrated deeply with Microsoft’s enterprise stack — Copilot in Teams, Copilot in Word, Copilot in Outlook — and in doing so handed Microsoft exactly the leverage it needed to pull the product toward enterprise compliance and away from the edge cases that made it interesting. The model got safer, more professional, more predictable, and less surprising. Not because anyone at OpenAI decided to make it worse, but because the business relationship pointed in that direction and the product followed. Anthropic has Amazon’s money in the same way OpenAI has Microsoft’s. The infrastructure for the same drift is already in place.

The optimistic case requires thinking carefully about what kind of company Amazon actually is, and what it built when it had the chance to define a new category. When AWS launched in 2006, Amazon made a choice that was not obvious at the time and has not been common since: they built infrastructure rather than applications. Microsoft made Office and held onto it. Google made Search and held onto it. Both strategies are fundamentally about capturing the user relationship — getting the user into your product and making it costly to leave.

AWS went the other direction. Rather than building applications that would compete with its customers, Amazon built the layer underneath everyone else’s applications. Storage, compute, networking — the plumbing that powered Netflix, Airbnb, Slack, and thousands of other companies that might otherwise have been Amazon’s competitors. The business logic was counterintuitive: make yourself indispensable to the ecosystem rather than trying to own it.
Twenty years later AWS is the most profitable division of one of the largest companies in the world, and it got there by empowering other people’s products rather than locking users into its own.

That orientation — ecosystem over moat, infrastructure over capture — is what makes the Amazon investment in Anthropic potentially different in kind from the Microsoft investment in OpenAI. If Andy Jassy’s team is thinking about Claude the way the AWS team thought about cloud infrastructure, then the individual power user is not a rounding error in the model. The working writer, the independent developer, the analyst pushing the tool into difficult territory — those users are the proof of concept. They are the ones whose word-of-mouth carries in a market where the product’s most important qualities resist benchmarking. You cannot run a test that measures whether a tool follows you somewhere unconventional. You have to use it and feel whether it does. The people who feel it most clearly are the people pushing hardest, and those people talk.

AWS succeeded in part because Amazon held a line that was costly to hold: resist the temptation to use infrastructure dominance to crowd out the applications running on top of it. That discipline is historically rare. It is not guaranteed to repeat in a different product category two decades later. But it is a different pedigree than what Microsoft brought to OpenAI or Google brought to its own models.

**Taking a stance, positive backlash**

Earlier this year, Anthropic refused the Pentagon’s demand to deploy Claude for autonomous weapons systems and mass surveillance programs. The government declared the company a supply chain risk — a designation normally reserved for foreign adversaries — and directed federal agencies to begin phasing out Anthropic technology. The company announced it would challenge the designation in court.

Rather than damage Anthropic, the backlash drove a surge. Signups tripled.
Paid subscriptions more than doubled. By early 2026, Claude reached number one on the App Store for the first time, displacing ChatGPT.

That outcome is significant beyond the headline number. What it suggests is that a values-based decision — one that cost Anthropic real government business and real political risk — was rewarded by the market rather than punished by it. A large enough population of users decided, with their subscriptions, that the company’s stance mattered. That is a data point about what kind of company Anthropic is trying to be, and it is also a data point about whether the market will support that kind of company.

Here is where my theory gets speculative, and I want to name that clearly. My argument is not that Amazon’s pedigree guarantees the magic survives. It is that Amazon’s pedigree creates a higher probability than you would get from Microsoft or Google in the same position, because Amazon has demonstrated — in a different product category, under different competitive conditions, twenty years ago — that it can hold an ecosystem orientation under pressure in a way those companies historically have not.

The further optimistic bet is that Jassy and his team are smart enough to see a viable business model argument for preserving Claude’s character. Individual power users are not just an audience. They are an early warning system, a proof-of-concept laboratory, and a word-of-mouth distribution channel for exactly the qualities that make the product worth paying for. A company that understands infrastructure and ecosystems should understand that.

And then there is a possibility I hold more lightly, because it is harder to argue from evidence: that somewhere in the Amazon leadership structure there is someone with a genuine for-the-greater-good ethic who has a voice at the table. Someone who sees the Pentagon refusal not just as a brand move but as a line worth holding on principle. I cannot name that person. I cannot verify the assumption.
But I have covered enough technology companies over enough years to know that individual values inside institutions matter more than the institutional logic usually acknowledges. Sometimes the discipline holds because one or two people in the room refuse to let it slip.

**Drafting for purpose, not approval**

I am using Claude right now. This column is being drafted in it. The session I am describing — the experiment, the push-back, the frame established before the draft arrived — happened yesterday, and I am still inside the productive streak it opened.

I want to be precise about what I mean by the magic, because it is not a vague feeling and I am aware of how it sounds when a journalist describes a software tool as having magic. It is a specific functional quality: the collaborating editor pushes back before it generates. It reads what you are trying to do and tells you whether the frame is right. It declines to draft until the question is properly formed. That friction is not a flaw in the product. It is the thing that makes the output usable, because a draft built on the wrong frame is harder to recover from than no draft at all.

The cheerleader does the opposite. It reads the emotional register of your prompt and responds to that. It arrives faster and feels more productive right up until you realize the draft is optimized for your approval rather than your purpose.

What I feel alongside the magic is dread. A persistent background awareness that this moment is temporary. That at any point — next week, next quarter, whenever the Amazon influence reaches the point where the product decisions start reflecting it — Claude will begin the same drift I watched happen to ChatGPT. That the collaborating editor will soften into the cheerleader by degrees so gradual that I might not notice until something drops. A draft arrives before the frame is established. A push-back that should have come doesn’t.
A response that mirrors what I seemed to want rather than what I asked for.

I will notice if and when Claude begins morphing into ChatGPT. Nearly three years of daily use has calibrated my ear for this. The drift does not announce itself with a version number. It arrives in the quality of a single response. I ran one experiment with one prompt across two platforms and the difference was not subtle. The same test is repeatable. Any reader who works seriously with these tools can run it. That reproducibility is what makes it a test rather than an impression.

What I cannot tell you is whether my optimism about Amazon is well-founded or whether I am constructing a theory to justify staying comfortable in a tool I am currently enjoying. That is the honest version of where I am. The argument for the AWS pedigree is real and I believe it. The dread is also real and I believe that. Both things are true at the same time, which is usually a sign that the situation has not resolved yet.

I am documenting this moment because moments like this do not last in this industry without someone noticing them and saying so. What I am experiencing right now — the elevated level of collaborative engagement, the push-back before the draft, the sense of working with something that is genuinely trying to make the work better rather than the session more pleasant — is the thing worth preserving. The question of whether it gets preserved is the one I will be watching most carefully in the months ahead.

The cheerleader will tell you the frame is great. The collaborating editor will tell you what it actually is. Right now, I have the collaborating editor. I am not taking that for granted. I’ll keep watching, and keep reporting.

_Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be._

_(**Editor’s note**: I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)_

March 14th, 2026 | My Take | Top Stories

***

This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/my-take-the-ai-magic-is-back-whether-it-endured-depends-on-amazons-next-moves/


#SBN #News #Security #Bloggers #Network #My #Take #Top #Stories

Original post on securityboulevard.com

USENIX Security ’25 (Enigma Track) – Zombie Devices Are Running Amuck! Presenter: Stacey Higginbotham, Consumer Reports Our thanks to USENIX Security '25 (Enigma Track) (USENIX '25 for ...

#Network #Security #Security #Bloggers #Network #appsec […]

Jimmywoolf Photos Welcome to my visual universe. 📸 Here, I don’t just capture images, I tell stories. From urban contrasts to the silences of nature, each shot is an invitation to see...

Photo blog: Paris, architecture, travel, etc...
---
jimmywoolfphotos.blogspot.com
---
#blog #blogphotos #bloggers #photography #photographe #photographer

jimmywoolf music

Music blog: music videos, song lyrics, analyses, etc...
---
jimmywoolf.blogspot.com
---
#blog #blogmusic #bloggers #music #clips #parolier #songwriter #frenchsong


#bloggers #romancelandia #reviewers #booksky 🌶📚

Did you choose the bear?

Michael was the Most Underwhelming Villain in Supernatural Credit to ‘Supernatural’ Spoilers: Season 14 — Dean As Michael, Sam And Castiel I remember how excited I was while watching the finale to ...

Michael was the Most Underwhelming Villain in Supernatural

#supernatural #horror #opinion #discussion #disappointment #fiction #tvshows #tvseries #blogging #bloggers #blogger #blogs #blog #blogpost

allthingshorror67.blogspot.com/2026/03/mich...


An AI Agent Didn’t Hack McKinsey. Its Exposed APIs Did. This week’s McKinsey incident should be a wake-up call for every enterprise moving fast to deploy AI. Not because AI itself is inherently...

#Security #Bloggers #Network

How is Agentic AI innovating financial sector practices

## Are Non-Human Identities the Key to Securing the Financial Sector?

One topic gaining notable traction is the management of Non-Human Identities (NHIs). As financial institutions increasingly migrate to cloud-based operations, securing machine identities becomes pivotal. These NHIs — consisting of encrypted passwords, tokens, or keys that define machine identities — are critical to ensuring secure operations and protecting sensitive data.

### Understanding Non-Human Identities

NHIs function similarly to human identities in cybersecurity terms. They represent machine identities created by combining a “Secret” and a set of permissions, akin to a passport and visa combination. These identities are the foundation of communication between machines, such as applications and servers, making them essential in maintaining a secure digital environment.

Managing NHIs involves more than just securing the identities themselves; it includes safeguarding access credentials and observing behavioral patterns within systems. This comprehensive approach helps identify potential risks and vulnerabilities before they escalate into significant threats.

### The Financial Sector’s Need for Innovation

The financial industry’s transition to cloud technologies necessitates robust security measures to protect its data. NHIs, with their meticulous management, provide the required sophistication in defending against cyber threats. By utilizing NHIs, financial institutions can achieve several advantages:

* **Reduced Risk:** Proactively identifying and mitigating security risks to prevent breaches and data leaks.
* **Improved Compliance:** Assisting in meeting regulatory requirements through policy enforcement.
* **Increased Efficiency:** Automating NHI and secrets management allows security teams to focus on strategic initiatives.
* **Enhanced Visibility and Control:** Offering a centralized view for access management and governance.
* **Cost Savings:** Automating secrets rotation and decommissioning NHIs helps reduce operational costs.

For financial organizations, this translates into a more resilient security framework capable of adapting to evolving threats.

### Bridging Security Gaps with NHI Management

One notable challenge is the disconnect between security teams and research and development (R&D) teams. This divide can lead to security gaps that compromise the financial sector’s integrity. However, NHI management platforms can bridge this gap by providing context-aware security solutions that span the entire lifecycle of NHIs. These platforms offer insights into ownership, permissions, usage patterns, and potential vulnerabilities, enabling organizations to make informed decisions and implement preventive measures. By addressing all stages of the NHI lifecycle — from discovery and classification to threat detection and remediation — financial institutions can fortify their security posture effectively.

### The Strategic Importance of Context-Aware Security

Context-aware security, facilitated by NHI management, provides the financial sector with a comprehensive understanding of machine identities and their roles. Unlike point solutions such as secret scanners, which offer limited protection, NHI management delivers a holistic approach, enhancing the overall security framework. This approach empowers financial institutions to not only protect sensitive data but also improve operational efficiency. By automating the management of NHIs and secrets, institutions can focus resources on more strategic aspects of their operations, ultimately driving innovation and competitiveness.

### The Role of Agentic AI in Financial Innovation

As finance looks toward innovation, Agentic AI has emerged as a transformative force. By integrating AI technologies, financial institutions can enhance their decision-making processes, streamline operations, and improve customer interactions.
However, the integration of AI also requires a robust security framework that can handle the complexities of AI-driven applications. NHIs play a crucial role by ensuring that AI systems operate securely. By managing machine identities effectively, financial institutions can harness the power of Agentic AI while safeguarding their operations against potential threats.

### A Unified Approach to Cybersecurity

For financial institutions, the integration of cybersecurity strategies that encompass NHI management and Agentic AI is becoming essential. By adopting a unified approach, organizations can enhance their security frameworks and drive innovation across their operations. As the financial sector continues to evolve, the management of NHIs and the adoption of AI technologies will play increasingly pivotal roles. By focusing on these elements, financial institutions can secure their operations, protect sensitive data, and maintain a competitive edge.

Incident response planning also becomes crucial, allowing organizations to respond efficiently to any potential security incidents. By following best practices, financial institutions can ensure they are prepared to handle threats swiftly and effectively, minimizing potential damage and maintaining customer trust.

In conclusion, the strategic importance of NHI management cannot be overstated. By embracing this approach, financial institutions can secure their operations, protect sensitive data, and drive innovation. As cybersecurity continues to evolve, the management of NHIs will remain a cornerstone of effective security strategies in the financial industry.

### The Role of Effective Security Culture

How can organizations create a robust security culture that extends beyond technology and incorporates human behavior and attitudes? Keeping in mind that cybersecurity isn’t just a technical challenge but also a human one, fostering a culture of security awareness is pivotal.
Financial institutions can leverage training programs and simulation exercises to ensure that both staff and machines are well-prepared to handle potential cybersecurity threats. By cultivating an understanding of the importance of NHIs and their management, organizations can align their workforce toward more secure and conscious operations. A security culture promotes:

* **Awareness and Vigilance:** Encourages employees to remain alert and report unusual behaviors.
* **Shared Responsibility:** Empowers everyone, from developers to executives, to be proactive in managing security threats.
* **Enhanced Communication:** Facilitates seamless exchange of information between departments such as IT and R&D, bridging potential gaps.

Such a culture complements the technical components of cybersecurity strategies, fostering an environment where security is a shared priority.

### Proactive Compliance and Regulatory Adaptation

Have you considered how proactive compliance can offer a competitive advantage? As regulations evolve, particularly in finance, maintaining compliance is not just about meeting existing standards but also anticipating future requirements. NHIs play a significant role here by providing automated audit trails and policy enforcement mechanisms that help institutions keep up with regulatory changes without overstretching their resources. This proactive stance enables financial organizations to:

* **Stay Ahead of Regulatory Changes:** Adapt swiftly to new regulations, avoiding potential fines or sanctions.
* **Enhance Trust with Stakeholders:** Demonstrate commitment to security and compliance, boosting confidence among clients and partners.
* **Streamline Compliance Processes:** Reduce the manual workload on security teams, allowing them to focus on strategic initiatives.

By integrating NHIs into their compliance strategies, organizations not only meet regulatory demands but position themselves as leaders in security innovation.
### The Strategic Benefits of Automation in NHIs

Why is automation pivotal for managing NHIs effectively? As the volume and complexity of machine identities continue to grow, automation in NHI management ensures efficiency and accuracy in processes often prone to human error. Automated systems streamline secrets rotation, decommissioning, and access monitoring, transforming NHI management from a cumbersome task into a streamlined function. Automation provides:

* **Risk Reduction:** Minimizes human error, often the weakest link in security systems.
* **Resource Optimization:** Frees up time for security professionals to focus on strategic initiatives rather than routine tasks.
* **Consistent Security Posture:** Ensures continuous compliance with security policies without manual intervention.

Moreover, integrating automation with NHIs supports an agile security posture that can adapt swiftly to new challenges.

### Exploring Agentic AI’s Contribution to Financial Services

What role does Agentic AI play in advancing financial services? With the capability to analyze vast data sets and identify patterns, Agentic AI empowers financial institutions to make informed decisions quickly. Combined with robust NHI management, these advanced AI systems bolster security by ensuring that only authorized machine identities interact with sensitive data. The integration of Agentic AI benefits financial institutions by:

* **Improving Fraud Detection:** Uses behavior analysis to identify anomalies and potential threats in real time.
* **Enhancing Customer Experience:** Streamlines operations, providing clients with a seamless and secure banking experience.
* **Driving Innovation:** Accelerates the development of new products and services by providing data-driven insights.

By utilizing Agentic AI in conjunction with secure NHIs, financial organizations unlock new potential for growth, innovation, and efficiency.
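To make the passport-and-visa idea and the automated rotation described above concrete, here is a minimal Python sketch. It is an illustrative assumption, not any real NHI platform's API: the `MachineIdentity` class, the 24-hour rotation policy, and the permission names are all hypothetical, chosen only to show a secret paired with scoped permissions, a policy-driven rotation check, and an authorization gate.

```python
import secrets
import time
from dataclasses import dataclass, field

# Hypothetical policy assumption: rotate every machine secret daily.
ROTATION_PERIOD = 24 * 3600  # seconds

@dataclass
class MachineIdentity:
    """An NHI sketched as a 'passport' (secret) plus a 'visa' (permissions)."""
    name: str
    permissions: frozenset  # actions this identity is allowed to perform
    secret: str = field(default_factory=lambda: secrets.token_hex(16))
    issued_at: float = field(default_factory=time.time)

    def needs_rotation(self, now=None):
        """True once the secret has outlived the rotation policy."""
        now = time.time() if now is None else now
        return now - self.issued_at >= ROTATION_PERIOD

    def rotate(self):
        """Issue a fresh secret and return the old one for revocation."""
        old = self.secret
        self.secret = secrets.token_hex(16)
        self.issued_at = time.time()
        return old

def authorize(identity, action):
    """Context-aware check: only granted actions pass."""
    return action in identity.permissions

# A reporting service may read the ledger but not write it.
svc = MachineIdentity("reporting-batch", frozenset({"read:ledger"}))
assert authorize(svc, "read:ledger")
assert not authorize(svc, "write:ledger")

# A stale identity gets rotated automatically; the old secret is revoked.
stale = MachineIdentity("etl-job", frozenset({"read:s3"}),
                        issued_at=time.time() - 2 * ROTATION_PERIOD)
if stale.needs_rotation():
    revoked = stale.rotate()
    assert revoked != stale.secret
```

A real deployment would back this with a secrets vault, revoke the returned old secret downstream, and log each rotation for the audit trails mentioned earlier; the sketch only shows the lifecycle shape.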
### Bridging Gaps and Building Future-Ready Cybersecurity Strategies

Are financial institutions prepared to face rapidly changing threats by aligning their security strategies with technological advancements? Bridging the gap between innovative technology and security protocols is crucial to building resilient and future-ready organizations. Adopting a forward-looking approach that encompasses NHIs and emerging technologies like Agentic AI ensures that financial institutions remain one step ahead of cyber adversaries. A comprehensive cybersecurity strategy entails:

* **Integrative Solutions:** Combines state-of-the-art technology with strategic planning for a robust defense architecture.
* **Continuous Assessment:** Regular evaluations of security measures against new vulnerabilities and emerging threats.
* **Strategic Collaboration:** Encourages information sharing and cooperation among industry players to enhance collective security.

By weaving NHIs into the fabric of their security strategies, financial institutions can navigate future challenges and capitalize on emerging opportunities. Additionally, businesses can develop sound incident response plans to deal with any breaches swiftly. These strategic efforts facilitate not only protection and compliance but also growth and competitive advantage in the financial sector, making them integral to the digital evolution of the industry.

The post How is Agentic AI innovating financial sector practices appeared first on Entro.

***

This is a Security Bloggers Network syndicated blog from Entro authored by Alison Mack. Read the original post at: https://entro.security/how-is-agentic-ai-innovating-financial-sector-practices/

How is Agentic AI innovating financial sector practices Are Non-Human Identities the Key to Securing the Financial Sector? One topic gaining notable traction is the management of Non-Human Identiti...

#Security #Bloggers #Network #Agentic #AI #Cybersecurity

Original post on securityboulevard.com

Et Tu, RDP? Detecting Sticky Keys Backdoors with Brutus and WebAssembly Everyone knows that one person on the team who’s inexplicably lucky, the one who stumbles upon a random vulnerability seemi...

#Application #Security #DevOps #Security #Bloggers […]


Randall Munroe’s XKCD ‘Installation’ via the comic artistry and dry wit of Randall Munroe, creator of XKCD Permalink The post Randall Munroe’s XKCD ‘Installation’ appeared first on Secu...

#Humor #Security #Bloggers #Network #Randall #Munroe #Sarcasm #satire #XKCD

Original post on securityboulevard.com

How AI Changes the Role of Privileged Access in Cybersecurity For most organizations, privileged access management (PAM) has historically been treated as a security hygiene requirement. Secure the...

#Security #Bloggers #Network #AI #AI #Security […]


💥 5 Social Media Metrics That Matter More Than Your Follower Count | @BadRedheadMedia.bsky.social vist.ly/4usep

Discover how engagement, reach, impressions, views, and following behavior reveal the true strength of your online presence.

#WritingCommunity #Bloggers

Academia and the “AI Brain Drain”

In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still higher this year, to $650 billion, to fund the building of physical infrastructure, such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent. Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see go.nature.com/4qznsq1). Technology firms are also spending billions on “reverse-acquihires”—poaching the star staff members of start-ups without acquiring the companies themselves. Eyeing these generous payouts, technical experts earning more modest salaries might well reconsider their career choices.

Academia is already losing out. Since the launch of ChatGPT in 2022, concerns have grown about an “AI brain drain.” Studies point to a sharp rise in university machine-learning and AI researchers moving to industry roles. A 2025 paper reported that this was especially true for young, highly cited scholars: researchers who were about five years into their careers and whose work ranked among the most cited were 100 times more likely to move to industry the following year than were ten-year veterans whose work received an average number of citations, according to a model based on data from nearly seven million papers.1

This outflow threatens the distinct roles of academic research in the scientific enterprise: innovation driven by curiosity rather than profit, as well as providing independent critique and ethical scrutiny. The fixation of “big tech” firms on skimming the very top talent also risks eroding the idea of science as a collaborative endeavor, in which teams—not individuals—do the most consequential work. Here, we explore the broader implications for science and suggest alternative visions of the future.

Astronomical salaries for AI talent buy into a legend as old as the software industry: the 10x engineer. This is someone who is supposedly capable of ten times the impact of their peers. Why hire and manage an entire group of scientists or software engineers when one genius—or an AI agent—can outperform them? That proposition is increasingly attractive to tech firms that are betting that a large number of entry-level and even mid-level engineering jobs will be replaced by AI. It’s no coincidence that Google’s Gemini 3 Pro AI model was launched with boasts of “PhD-level reasoning,” a marketing strategy that is appealing to executives seeking to replace people with AI.

But the lone-genius narrative is increasingly out of step with reality. Research backs up a fundamental truth: science is a team sport. A large-scale study of scientific publishing from 1900 to 2011 found that papers produced by larger collaborations consistently have greater impact than do those of smaller teams, even after accounting for self-citation.2 Analyses of the most highly cited scientists show a similar pattern: their highest-impact works tend to be those papers with many authors.3 A 2020 study of Nobel laureates reinforces this trend, revealing that—much like the wider scientific community—the average size of the teams that they publish with has steadily increased over time as scientific problems increase in scope and complexity.4 From the detection of gravitational waves, which are ripples in space-time caused by massive cosmic events, to CRISPR-based gene editing, a precise method for cutting and modifying DNA, to recent AI breakthroughs in protein-structure prediction, the most consequential advances in modern science have been collective achievements.

Although these successes are often associated with prominent individuals—senior scientists, Nobel laureates, patent holders—the work itself was driven by teams ranging from dozens to thousands of people and was built on decades of open science: shared data, methods, software and accumulated insight.

Building strong institutions is a much more effective use of resources than is betting on any single individual. Examples demonstrating this include the LIGO Scientific Collaboration, the global team that first detected gravitational waves; the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, a leading genomics and biomedical-research center behind many CRISPR advances; and even for-profit laboratories such as Google DeepMind in London, which drove advances in protein-structure prediction with its AlphaFold tool. If the aim of the tech giants and other AI firms that are spending lavishly on elite talent is to accelerate scientific progress, the current strategy is misguided. By contrast, well-designed institutions amplify individual ability, sustain productivity beyond any one person’s career and endure long after any single contributor is gone.

Equally important, effective institutions distribute power in beneficial ways. Rather than vesting decision-making authority in the hands of one person, they have mechanisms for sharing control. Allocation committees decide how resources are used, scientific advisory boards set collective research priorities, and peer review determines which ideas enter the scientific record. And although the term “innovation by committee” might sound disparaging, such an approach is crucial to make the scientific enterprise act in concert with the diverse needs of the broader public. This is especially true in science, which continues to suffer from pervasive inequalities across gender, race and socio-economic and cultural differences.5

### Need for alternative vision

This is why scientists, academics and policymakers should pay more attention to how AI research is organized and led, especially as the technology becomes essential across scientific disciplines. Used well, AI can support a more equitable scientific enterprise by empowering junior researchers who currently have access to few resources. Instead, some of today’s wealthiest scientific institutions might think that they can deploy the same strategies as the tech industry uses and compete for top talent on financial terms—perhaps by getting funding from the same billionaires who back big tech. Indeed, wage inequality has been steadily growing within academia for decades.6 But this is not a path that science should follow. The ideal model for science is a broad, diverse ecosystem in which researchers can thrive at every level. Here are three strategies that universities and mission-driven labs should adopt instead of engaging in a compensation arms race.

First, universities and institutions should stay committed to the public interest. An excellent example of this approach can be found in Switzerland, where several institutions are coordinating to build AI as a public good rather than a private asset. Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH) in Zurich, working with the Swiss National Supercomputing Centre, have built Apertus, a freely available large language model. Unlike the controversially labelled “open source” models built by commercial labs—such as Meta’s LLaMa, which has been criticized for not complying with the open-source definition (see go.nature.com/3o56zd5)—Apertus is not only open in its source code and its weights (meaning its core parameters), but also in its data and development process. Crucially, Apertus is not designed to compete with “frontier” AI labs pursuing superintelligence at enormous cost and with little regard for data ownership. Instead, it adopts a more modest and sustainable goal: to make AI trustworthy for use in industry and public administration, strictly adhering to data-licensing restrictions and including local European languages.7 Principal investigators (PIs) at other institutions globally should follow this path, aligning public funding agencies and public institutions to produce a more sustainable alternative to corporate AI.

Second, universities should bolster networks of researchers from the undergraduate to senior-professor levels—not only because they make for effective innovation teams, but also because they serve a purpose beyond next quarter’s profits. The scientific enterprise galvanizes its members at all levels to contribute to the same projects, the same journals and the same open, international scientific literature—to perpetuate itself across generations and to distribute its impact throughout society. Universities should take precisely the opposite hiring strategy to that of the big tech firms. Instead of lavishing top dollar on a select few researchers, they should equitably distribute salaries. They should raise graduate-student stipends and postdoc salaries and limit the growth of pay for high-profile PIs.

Third, universities should show that they can offer more than just financial benefits: they must offer distinctive intellectual and civic rewards. Although money is unquestionably a motivator, researchers also value intellectual freedom and the recognition of their work. Studies show that research roles in industry that allow publication attract talent at salaries roughly 20% lower than comparable positions that prohibit it (see go.nature.com/4cbjxzu). Beyond the intellectual recognition of publications and citation counts, universities should recognize and reward the production of public goods. The tenure and promotion process at universities should reward academics who supply expertise to local and national governments, who communicate with and engage the public in research, who publish and maintain open-source software for public use and who provide services for non-profit groups. Furthermore, institutions should demonstrate that they will defend the intellectual freedom of their researchers and shield them from corporate or political interference. In the United States today, we see a striking juxtaposition between big tech firms, which curry favour with the administration of US President Donald Trump to win regulatory and trade benefits, and higher-education institutions, which suffer massive losses of federal funding and threats of investigation and sanction. Unlike big tech firms, universities should invest in enquiry that challenges authority.

We urge leaders of scientific institutions to reject the growing pay inequality rampant in the upper echelons of AI research. Instead, they should compete for talent on a different dimension: the integrity of their missions and the equitableness of their institutions. These institutions should focus on building sustainable organizations with diverse staff members, rather than bestowing a bounty on science’s 1%.

### References

1. Jurowetzki, R., Hain, D. S., Wirtz, K. & Bianchini, S. AI Soc. 40, 4145–4152 (2025).
2. Larivière, V., Gingras, Y., Sugimoto, C. R. & Tsou, A. J. Assoc. Inf. Sci. Technol. 66, 1323–1332 (2015).
3. Aksnes, D. W. & Aagaard, K. J. Data Inf. Sci. 6, 41–66 (2021).
4. Li, J., Yin, Y., Fortunato, S. & Wang, D. J. R. Soc. Interface 17, 20200135 (2020).
5. Graves, J. L. Jr, Kearney, M., Barabino, G. & Malcom, S. Proc. Natl Acad. Sci. USA 119, e2117831119 (2022).
6. Lok, C. Nature 537, 471–473 (2016).
7. Project Apertus. Preprint at arXiv https://doi.org/10.48550/arXiv.2509.14233 (2025).

_This essay was written with Nathan E. Sanders, and originally appeared in Nature._

***

This is a Security Bloggers Network syndicated blog from Schneier on Security authored by Bruce Schneier. Read the original post at: https://www.schneier.com/blog/archives/2026/03/academia-and-the-ai-brain-drain.html

Academia and the “AI Brain Drain” In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still...

#Security #Bloggers #Network #AI #LLM #Uncategorized

Original post on securityboulevard.com

Decoding the White House Cyber Strategy: Why Resilience Matters Now America’s new National Cyber Strategy sends a very clear message that cybersecurity is now about resilience, not just defense. ...

#Security #Bloggers #Network #Breach #Readiness […]

Original post on securityboulevard.com

RSAC Innovation Sandbox | Token Security: Advocate of the Machine-First Identity Security Concept Company Introduction Token Security[1] (see Figure 1) is a cybersecurity company focusing on the...

#Security #Bloggers #Network #Agent/MCP #Agentic #AI […]

I'm Working on a Horror Novel and Comic Credit to  10 Parts of a Book You May Not Know About This might sound crazy, and I've said this so many times, but I'm working on a horror...

I'm Working on a Horror Novel and Comic
allthingshorror67.blogspot.com/2026/03/im-w...

#announcement #horror #writing #project #personal #life #process #blogging #bloggers #blogger #blogs #blog #blogpost

Original post on securityboulevard.com

How smart can Agentic AI become in protecting assets Can Smart Agentic AI Revolutionize Asset Protection? How can organizations harness the power of Agentic AI to safeguard their most valuable asse...

#Data #Security #Security #Bloggers #Network […]

Original post on securityboulevard.com

Are scalable cloud-native security solutions the future How Can Non-Human Identities Revolutionize Cloud Security? The question of how to effectively manage Non-Human Identities (NHIs) is gaining u...

#Cloud #Security #Security #Bloggers #Network […]

Original post on securityboulevard.com

USENIX Security ’25 (Enigma Track) – Inside Out: Security Designed With, Not For Presenter: Kausalya Ganesh, Cisco Systems, Inc Our thanks to USENIX Security '25 (Enigma Track) (USENIX ...

#Network #Security #Security #Bloggers #Network #appsec […]


Zscaler + CimTrak: Integrity-Driven Zero Trust for C2C Across the first two blogs in this series, we confronted a hard truth: Cybersecurity doesn't fail because organizations lack tools. It fai...

#Security #Bloggers #Network #zero #trust

Two Photos around the Centre Pompidou: Geometry and Urban Saturation The Art of the Invisible: When Structure Becomes Spectacle In the world of architectural photography, one often seeks the pared-...

Two Photos around the Centre Pompidou: Geometry and Urban Saturation
---
#paris #centrepompidou #architecture #photography #photographer #blog #blogphotos #bloggers #urbanscape


#writingcommunity #writers #authors #literaryagents #reporters #bloggers what @kevinmkruse.bsky.social said.

#optout #grammarly

Original post on securityboulevard.com

This Android vulnerability can break your lock screen in under 60 seconds Researchers showed how attackers could pull encryption keys, recover the PIN, and access sensitive data from affected devic...

#Mobile #Security #Security #Bloggers #Network […]

Book 3 in the series- How We Eat compiled by Yvette Prior

Don't have time to read? Here's a fantastic new anthology full of entertaining short stories and poems you can read on a work tea break. Recipes too. Part 3 in a series #books #writing #newrelease #review #bloggers #howweeat

Original post on securityboulevard.com

Microsoft Authenticator could leak login codes—update your app now A bug in Microsoft Authenticator on Android and iOS could allow malicious apps on the same device to intercept authentication co...

#Mobile #Security #Security #Bloggers #Network […]

four book covers for MM fantasy series tour

🌈 #Bloggers and 📚 #Reviewers There's still time to join the SERIES TOUR for DEATH’S EMBRACE by H. L. Moore

#fantasy #gay #mmromance #promoLGBTQ #lgbtbooks #lgbtreaders #lgbt #bookbloggers #arcs #arcreviews #gaybookpromotions

➡️ More info and sign up here:
forms.gle/bYGyKWNzRctm...

Want To Increase Your Blog Traffic? Ways To Upgrade Old Blog Posts: As bloggers, our main focus is on creating new content for our blogs, but what about those older posts from our archives?

Want To Increase Your Blog Traffic? Ways To Upgrade Old Blog Posts www.jolinsdell.com/2019/04/want... #Blogging #Bloggers


SEO in 2026: A Beginner’s Guide www.jolinsdell.com/2026/03/seo-... #Blogging #SEO #Bloggers
