
Massimo Bonanni

@massimobonanni

"Paranormal Trainer, with the head in the Cloud and all the REST in microservices!" (cit.)

63
Followers
68
Following
1,837
Posts
10.11.2024
Joined

Latest posts by Massimo Bonanni @massimobonanni

Preview
Available today: GPT-5.4 Thinking in Microsoft 365 Copilot

Today, we’re bringing OpenAI’s GPT‑5.4 Thinking to Microsoft 365 Copilot and Microsoft Copilot Studio—available in addition to the recent GPT-5.3 Instant update. With GPT‑5.4 Thinking, Copilot can think deeper on complex work by combining advances in reasoning, coding, and agentic workflows—helping it work through technical prompts and longer tasks with higher-quality outputs and less back-and-forth. Work IQ brings relevant work context into Copilot so it can reason, personalize, and help you turn deeper thinking into context-aware drafts, slides, and spreadsheets.

We are committed to bringing you the latest cutting-edge AI innovation and model choice built for work and tailored to your business needs—with the security, compliance, and privacy that you expect from Microsoft.

Get started today

GPT-5.4 Thinking is now available in Copilot Studio early release cycle environments and begins rolling out today to Microsoft 365 Copilot users with priority access and Microsoft 365 Copilot Chat users with standard access. Learn more about standard versus priority access here. In Copilot Chat, you can select GPT‑5.4 Think deeper from the model selector under More, and in Copilot Studio you can select GPT‑5.4 Reasoning. Our team will continue to refine the experience based on your feedback.

Learn more about Microsoft 365 Copilot and Microsoft Copilot Studio and start transforming work with Copilot today. For model details, learn more about GPT-5.4 Thinking here. For the latest research insights on the future of work and generative AI, visit WorkLab.

Available today: GPT-5.4 Thinking in Microsoft 365 Copilot

06.03.2026 23:08 👍 0 🔁 0 💬 0 📌 0
Preview
How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework GitHub Security Lab Taskflow Agent is very effective at finding Auth Bypasses, IDORs, Token Leaks, and other high-impact vulnerabilities. The post How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework appeared first on The GitHub Blog.

How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework

06.03.2026 22:31 👍 1 🔁 0 💬 0 📌 0
Preview
Figma MCP server can now generate design layers from VS Code GitHub Copilot users can now connect to the Figma MCP server to both pull design context into code and send rendered UI to Figma as editable frames. Together, these capabilities… The post Figma MCP server can now generate design layers from VS Code appeared first on The GitHub Blog.

Figma MCP server can now generate design layers from VS Code

06.03.2026 21:52 👍 1 🔁 0 💬 0 📌 0
Preview
GitHub Copilot in Visual Studio Code v1.110 – February release The Visual Studio Code February 2026 release makes agents practical for longer-running and more complex tasks. This gives you more control over how they run, new ways to extend what… The post GitHub Copilot in Visual Studio Code v1.110 – February release appeared first on The GitHub Blog.

GitHub Copilot in Visual Studio Code v1.110 – February release

06.03.2026 20:46 👍 0 🔁 0 💬 0 📌 0
Preview
From Manual Document Processing to AI-Orchestrated Intelligence

Building an IDP Pipeline with Azure Durable Functions, DSPy, and Real-Time AI Reasoning

The Problem

Think about what happens when a loan application, an insurance claim, or a trade finance document arrives at an organisation. Someone opens it, reads it, manually types fields into a system, compares it against business rules, and escalates for approval. That process touches multiple people, takes hours or days, and the accuracy depends entirely on how carefully it's done.

Organizations have tried to automate parts of this before — OCR tools, templated extraction, rule-based routing. But these approaches are brittle. They break when the document format changes, and they can't reason about what they're reading. The typical "solution" falls into one of two camps:

- Manual processing. Humans read, classify, and key in data. Accurate but slow, expensive, and impossible to scale.
- Single-model extraction. Throw an OCR/AI model at the document, trust the output, push to downstream systems. Fast but fragile — no validation, no human checkpoint, no confidence scoring.

What's missing is the middle ground: an orchestrated, multi-model pipeline with built-in quality gates, real-time visibility, and the flexibility to handle any document type without rewriting code.

That's what IDP Workflow is — a six-step AI-orchestrated pipeline that processes documents end to end, from a raw PDF to structured, validated data, with human oversight built in. This isn't automation replacing people. It's AI doing the heavy lifting and humans making the final call.
Architecture at a Glance

POST /api/idp/start
→ Step 1: PDF Extraction (Azure Document Intelligence → Markdown)
→ Step 2: Classification (DSPy ChainOfThought)
→ Step 3: Data Extraction (Azure Content Understanding + DSPy LLM, in parallel)
→ Step 4: Comparison (field-by-field diff)
→ Step 5: Human Review (HITL gate — approve / reject / edit)
→ Step 6: AI Reasoning Agent (validation, consolidation, recommendations)
→ Final structured result

The backend is Azure Durable Functions (Python) on Flex Consumption — customers only pay for what they use, and it scales automatically. The frontend is a Next.js dashboard with SignalR real-time updates and a Reaflow workflow visualization. Every step broadcasts stepStarted → stepCompleted / stepFailed events so the UI updates as work progresses.

The pattern applies wherever organisations receive high volumes of unstructured documents that need to be classified, data-extracted, validated, and approved.

The Six Steps, Explained

Step 1: PDF → Markdown

We use Azure Document Intelligence with the prebuilt-layout model to convert uploaded PDFs into structured Markdown — preserving tables, headings, and reading order. Markdown turns out to be a much better intermediate representation for LLMs than raw text or HTML.

```python
class PDFMarkdownExtractor:
    async def extract(self, pdf_path: str) -> tuple[PDFContent, Step01Output]:
        poller = self.client.begin_analyze_document(
            "prebuilt-layout",
            analyze_request=AnalyzeDocumentRequest(url_source=pdf_path),
            output_content_format=DocumentContentFormat.MARKDOWN,
        )
        result: AnalyzeResult = poller.result()
        # Split into per-page Markdown chunks...
```

Output: per-page Markdown content, total page count, and character stats.

Step 2: Document Classification (DSPy)

Rather than hard-coding classification rules, we use DSPy with ChainOfThought prompting. DSPy lets us define classification as a signature — a declarative input/output contract — and the framework handles prompt optimization.
```python
class DocumentClassificationSignature(dspy.Signature):
    """Classify document page into predefined categories."""

    page_content: str = dspy.InputField(desc="Markdown content of the document page")
    available_categories: str = dspy.InputField(desc="Available categories")
    classification: DocumentClassificationOutput = dspy.OutputField()
```

Categories are loaded from a domain-specific classification_categories.json. Adding new categories means editing a JSON file, not code.

Critically, classification is per-page, not per-document. A multi-page loan application might contain a loan form on page 1, income verification on page 2, and a property valuation on page 3 — each classified independently with its own confidence score and detected field indicators. This means multi-section documents are handled correctly downstream.

Why DSPy? It gives us structured, typed outputs via Pydantic models, automatic prompt optimization, and clean separation between the what (signature) and the how (ChainOfThought, Predict, etc.).

Step 3: Dual-Model Extraction (Run in Parallel)

This is where things get interesting. We run two independent extractors in parallel:

- Azure Content Understanding (CU): a specialized Azure service that takes the raw PDF and applies a domain-specific schema to extract structured fields.
- DSPy LLM Extractor: uses the Markdown from Step 1 with a dynamically generated Pydantic model (built from the domain's extraction_schema.json) to extract the same fields via an LLM.

The LLM provider is selectable at runtime — Azure OpenAI, Claude, or open-weight models deployed on Azure (Qwen, DeepSeek, Llama, Phi, and more from the Azure AI Model Catalog).
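The runtime provider selection can be sketched as a small factory that maps the dashboard choice to model settings. This is a minimal illustration, not the repository's actual code: the LLMConfig type, registry keys, and endpoint placeholders below are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LLMConfig:
    """Resolved provider settings handed to the DSPy extractor (illustrative)."""
    provider: str
    model: str
    api_base: str


# Hypothetical registry: names and endpoints are placeholders, not real values.
PROVIDERS = {
    "azure-openai": LLMConfig("azure-openai", "gpt-4.1", "https://<your-aoai>.openai.azure.com"),
    "claude-azure": LLMConfig("claude-azure", "claude-sonnet", "https://<your-claude-endpoint>"),
    "foundry-open": LLMConfig("foundry-open", "qwen-2.5-72b", "https://<your-foundry-endpoint>"),
}


def resolve_llm(provider_key: str) -> LLMConfig:
    """Return the config for the provider selected in the dashboard dropdown."""
    try:
        return PROVIDERS[provider_key]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider_key!r}") from None
```

Because every endpoint speaks an OpenAI-compatible API, swapping providers is just a different api_base handed to the same client code.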
```python
# In the orchestrator — fire both tasks at once
azure_task = context.call_activity("activity_step_03_01_azure_extraction", input)
dspy_task = context.call_activity("activity_step_03_02_dspy_extraction", input)
results = yield context.task_all([azure_task, dspy_task])
```

Both extractors use the same domain-specific schema but approach the problem differently. Running two models gives us a natural cross-check: if both extractors agree on a field value, confidence is high. If they disagree, we know exactly where to focus human attention — not the entire document, just the specific fields that need it.

Multi-Provider LLM Support

The DSPy extraction and classification steps aren't locked to a single model provider. From the dashboard, users can choose between:

- Azure OpenAI in Foundry Models — GPT-4.1, o3-mini (default)
- Claude on Azure — Anthropic's Claude models
- Foundry Models — open-weight models deployed on Azure via Foundry Models: Qwen 2.5 72B, DeepSeek V3/R1, Llama 3.3 70B, Phi-4, and more

The third option is key: instead of routing through a third-party service, you deploy open-weight models directly on Azure as serverless API endpoints through Azure AI Foundry. These endpoints expose an OpenAI-compatible API, so DSPy talks to them the same way it talks to GPT-4.1 — just with a different api_base. You get the model diversity of the open-weight ecosystem with Azure's enterprise security, compliance, and network isolation.

A factory pattern in the backend resolves the selected provider and model at runtime, so switching from Azure OpenAI to Qwen on Azure AI is a single dropdown change — no config edits, no redeployment. This makes it easy to benchmark different models against the same extraction schema and compare quality.

Step 4: Field-by-Field Comparison

The comparator aligns the outputs of both extractors and produces a diff report: matching fields, mismatches, fields found by only one extractor, and a calculated match percentage.
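The comparison logic can be sketched in a few lines. This is an assumption about the shape of the comparator, not the repo's implementation; field names and the flat-dict inputs are illustrative.

```python
def compare_extractions(azure_fields: dict, llm_fields: dict) -> dict:
    """Field-by-field diff of two extractor outputs (illustrative sketch)."""
    all_keys = set(azure_fields) | set(llm_fields)
    matches, mismatches, single_source = [], [], []
    for key in sorted(all_keys):
        if key not in azure_fields or key not in llm_fields:
            # Field found by only one extractor
            single_source.append(key)
        elif azure_fields[key] == llm_fields[key]:
            matches.append(key)
        else:
            mismatches.append(key)
    pct = 100 * len(matches) / len(all_keys) if all_keys else 0.0
    return {
        "matches": matches,
        "mismatches": mismatches,
        "single_source": single_source,
        "match_percentage": round(pct, 1),
    }
```

Mismatches and single-source fields are exactly the set a human reviewer needs to look at; everything else passes through with high confidence.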
This feeds directly into the human review step. Output: "Match: 87.5% (14/16 fields)".

Step 5: Human-in-the-Loop (HITL) Gate

The pipeline pauses and waits for a human decision. The Durable Functions orchestrator uses wait_for_external_event() with a configurable timeout (default: 24 hours) implemented as a timer race:

```python
review_event = context.wait_for_external_event(HITL_REVIEW_EVENT)
timeout = context.create_timer(
    context.current_utc_datetime + timedelta(hours=HITL_TIMEOUT_HOURS)
)
winner = yield context.task_any([review_event, timeout])
```

The frontend shows a side-by-side comparison panel where reviewers can see both values for each disputed field — pick Azure's value, the LLM's value, or type in a correction. They can add notes explaining their decision, then approve or reject. If nobody responds within the timeout, it auto-escalates (configurable behavior).

The orchestrator doesn't poll. It doesn't check a queue. The moment the reviewer submits their decision, the pipeline resumes automatically — using Durable Functions' native external event pattern.

Step 6: AI Reasoning Agent

The final step uses an AI agent with tool-calling to perform structured validation, consolidate field values, and generate a confidence score. This isn't just a prompt — it's an agent backed by the Microsoft Agent Framework with purpose-built tools:

- validate_fields — runs domain-specific validation rules (data types, ranges, cross-field logic)
- consolidate_extractions — merges Azure CU + DSPy outputs using confidence-weighted selection
- generate_summary — produces a natural-language summary with recommendations

The reasoning step can use standard models or reasoning-optimised models like o3 or o3-mini for higher-stakes validation. The agent streams its reasoning process to the frontend in real time — validation results, confidence scoring, and recommendations all appear as they're generated.
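Confidence-weighted consolidation, as performed by a tool like consolidate_extractions, can be illustrated with a minimal sketch. The (value, confidence) tuples and the agreement bonus are assumptions chosen for illustration, not the repo's actual scoring rules.

```python
def consolidate(azure: dict, llm: dict) -> dict:
    """Merge two extractor outputs field by field, preferring higher confidence.

    Each input maps field name -> (value, confidence in [0, 1]).
    """
    merged = {}
    for field in set(azure) | set(llm):
        candidates = [src[field] for src in (azure, llm) if field in src]
        # Pick the value with the highest confidence
        value, confidence = max(candidates, key=lambda vc: vc[1])
        # Agreement between independent extractors boosts confidence (illustrative +0.1)
        if len(candidates) == 2 and candidates[0][0] == candidates[1][0]:
            confidence = min(1.0, confidence + 0.1)
        merged[field] = (value, round(confidence, 2))
    return merged
```

Fields where both extractors agree come out with boosted confidence; disagreements carry the single best guess and a lower score, which the summary tool can then surface as a caveat.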
Domain-Driven Design: Zero-Code Extensibility

One of the most powerful design choices: adding a new document type requires zero code changes. Each domain is a folder under idp_workflow/domains/ with four JSON files:

```
idp_workflow/domains/insurance_claims/
├── config.json                      # Domain metadata, thresholds, settings
├── classification_categories.json   # Page-level classification taxonomy
├── extraction_schema.json           # Field definitions (used by both extractors)
└── validation_rules.json            # Business rules for the reasoning agent
```

The extraction_schema.json is particularly interesting — it's consumed by both the Azure CU service (which builds an analyzer from it) and the DSPy extractor (which dynamically generates a Pydantic model at runtime):

```python
def create_extraction_model_from_schema(schema: dict) -> type[BaseModel]:
    """Dynamically create a Pydantic model from an extraction schema JSON."""
    # Maps schema field definitions → Pydantic field annotations
    # Supports nested objects, arrays, enums, and optional fields
```

We currently ship four domains out of the box: insurance claims, home loans, small business lending, and trade finance.

See It in Action: Processing a Home Loan Application

To make this concrete, here's what happens when you process a multi-page home loan PDF — personal details, financial tables, and mixed content.

1. Upload & Extract. The document hits the dashboard and Step 1 kicks off. Azure Document Intelligence converts all pages to structured Markdown, preserving tables and layout. You can preview the Markdown right in the detail panel.
2. Per-Page Classification. Step 2 classifies each page independently: page 1 is a Loan Application Form, page 2 is Income Verification, page 3 is a Property Valuation. Each has its own confidence score and detected fields listed.
3. Dual Extraction. Azure CU and the DSPy LLM extractor run simultaneously. You can watch both progress bars in the dashboard.
4. Comparison. The system finds 16 fields total. 14 match between the two extractors. Two fields differ — the annual income figure and the loan term. Those are highlighted for review.
5. Human Review. The reviewer sees both values side by side for each disputed field, picks the correct value (or types a correction), adds a note, and approves. The moment they submit, the pipeline resumes — no polling.
6. AI Reasoning. The agent validates against home loan business rules: loan-to-value ratio, income-to-repayment ratio, document completeness. Validation results stream in real time.

Final output: 92% confidence, 11 out of 12 validations passed. The AI flags a minor discrepancy in employment dates and recommends approval with a condition to verify employment tenure.

Result: a document that would take 30–45 minutes of manual processing, handled in under 2 minutes — with complete traceability. Every step, every decision, timestamped in the event log.

Real-Time Frontend with SignalR

Every orchestration step broadcasts events through Azure SignalR Service, targeted to the specific user who started the workflow:

```python
def _broadcast(context, user_id, event, data):
    return context.call_activity("notify_user", {
        "user_id": user_id,
        "instance_id": context.instance_id,
        "event": event,
        "data": data,
    })
```

The frontend generates a session-scoped userId, passes it via the x-user-id header during SignalR negotiation, and receives only its own workflow events. No Pub/Sub subscriptions to manage.

The Next.js frontend uses:

- Zustand + Immer for state management (4 stores: workflow, events, reasoning, UI)
- Reaflow for the animated pipeline visualization
- React Query for data fetching
- Tailwind CSS for styling

The result is a dashboard where you can upload a document and watch each pipeline step execute in real time.
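The dynamic model generation described in the domain section above can be approximated with a stdlib-only sketch. The repo generates a Pydantic model; here dataclasses stand in, and the schema shape ({field: {"type", "required"}}) is an assumption for illustration.

```python
from dataclasses import make_dataclass, field
from typing import Optional

# Minimal type mapping; the real schema also supports nested objects, arrays, and enums.
TYPE_MAP = {"string": str, "number": float, "integer": int, "boolean": bool}


def model_from_schema(name: str, schema: dict):
    """Build a dataclass type from a {field: {"type": ..., "required": ...}} schema.

    Simplified stand-in for the repo's runtime Pydantic model generation.
    """
    entries = []
    for fname, spec in schema.items():
        ftype = TYPE_MAP[spec["type"]]
        if spec.get("required", False):
            entries.append((fname, ftype))
        else:
            entries.append((fname, Optional[ftype], field(default=None)))
    # Dataclasses require fields without defaults to come first
    entries.sort(key=lambda entry: len(entry) == 3)
    return make_dataclass(name, entries)
```

Editing the JSON schema then changes what both extractors look for, with no code deployed.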
Infrastructure: Production-Ready from Day One

The entire stack deploys with a single command using Azure Developer CLI (azd):

```shell
azd up
```

What gets provisioned:

- Azure Functions (Flex Consumption): backend API + orchestration
- Azure Static Web App: Next.js frontend
- Durable Task Scheduler: orchestration state management
- Storage Account: document blob storage
- Application Insights: monitoring and diagnostics
- Network Security Perimeter: storage network lockdown

Infrastructure is defined in Bicep with:

- Parameterized configuration (memory, max instances, retention)
- RBAC role assignments via a consolidated loop
- Two-region deployment (Functions + SWA have different region availability)
- Network Security Perimeter deployed in Learning mode, switched to Enforced post-deploy

Key Engineering Decisions

Why Durable Functions?

Orchestrating a multi-step pipeline with parallel execution, external event gates, timeouts, and retry logic is exactly what Durable Functions was designed for. The orchestrator is a Python generator function — each yield is a checkpoint that survives process restarts:

```python
def idp_workflow_orchestration(context: DurableOrchestrationContext):
    step1 = yield from _execute_step(context, ...)  # PDF extraction
    step2 = yield from _execute_step(context, ...)  # Classification
    results = yield context.task_all([azure_task, dspy_task])  # Parallel extraction
    # ... HITL gate, reasoning agent, etc.
```

No external queue management. No state database. No workflow engine to operate.

Why Dual Extraction?

Running two independent models on the same document gives us:

- Cross-validation — agreement between models is a strong confidence signal
- Coverage — one model might extract fields the other misses
- Auditability — human reviewers can see both outputs side by side
- Graceful degradation — if one service is down, the other still produces results

Why DSPy over Raw Prompts?
DSPy provides:

- Typed I/O — Pydantic models as signatures, not string parsing
- Composability — ChainOfThought, Predict, ReAct are interchangeable modules
- Prompt optimization — once you have labeled examples, DSPy can auto-tune prompts
- LM scoping — with dspy.context(lm=self.lm): isolates model configuration per call

Getting Started

```shell
# Clone
git clone https://github.com/lordlinus/idp-workflow.git
cd idp-workflow

# DTS Emulator (requires Docker)
docker run -d -p 8080:8080 -p 8082:8082 \
  -e DTS_TASK_HUB_NAMES=default,idpworkflow \
  mcr.microsoft.com/dts/dts-emulator:latest

# Backend
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
func start

# Frontend (separate terminal)
cd frontend && npm install && npm run dev
```

You'll also need Azurite (the local storage emulator) running, plus Azure OpenAI, Document Intelligence, Content Understanding, and SignalR Service endpoints configured in local.settings.json. See the Local Development Guide for the full setup.

Who Is This For?

If any of these sound familiar, IDP Workflow was built for you:

- "We're drowning in documents." — High-volume document intake with manual processing bottlenecks.
- "We tried OCR but it breaks on new formats." — Brittle extraction that fails when layouts change.
- "Compliance needs an audit trail for every decision." — Regulated industries where traceability is non-negotiable.

This is an AI-powered document processing platform — not a point OCR tool — with human oversight, dual AI validation, and domain extensibility built in from day one.
What's Next

- Prompt optimization — using DSPy's BootstrapFewShot with domain-specific training examples
- Batch processing — fan-out/fan-in orchestration for processing document queues
- Custom evaluators — automated quality scoring per domain
- Additional domains — community-contributed domain configurations

Try It Out

The project is fully open source: github.com/lordlinus/idp-workflow

Deploy to your own Azure subscription with azd up, upload a PDF from the sample_documents/ folder, and watch the pipeline run. We'd love feedback, contributions, and new domain configurations. Open an issue or submit a PR!

From Manual Document Processing to AI-Orchestrated Intelligence

06.03.2026 05:21 👍 0 🔁 0 💬 0 📌 0
Preview
GPT-5.4 is generally available in GitHub Copilot GPT-5.4, OpenAI’s latest agentic coding model, is now rolling out in GitHub Copilot. In our early testing of real-world, agentic, and software development capabilities, GPT-5.4 consistently hits new rates of… The post GPT-5.4 is generally available in GitHub Copilot appeared first on The GitHub Blog.

GPT-5.4 is generally available in GitHub Copilot

06.03.2026 00:51 👍 0 🔁 0 💬 0 📌 0
Preview
Discover and manage agent activity with new session filters GitHub Enterprise AI Controls and agent control plane now includes additional session filters, making it easier to discover and manage agent activity across your enterprise. What’s new In addition to… The post Discover and manage agent activity with new session filters appeared first on The GitHub Blog.

Discover and manage agent activity with new session filters

06.03.2026 00:51 👍 0 🔁 0 💬 0 📌 0
Preview
Quick access to merge status in pull requests is in public preview We are rolling out the pull request merge status at the top of every pull request page! Check merge readiness from anywhere in the pull request experience, including the new… The post Quick access to merge status in pull requests is in public preview appeared first on The GitHub Blog.

Quick access to merge status in pull requests is in public preview

05.03.2026 23:32 👍 0 🔁 0 💬 0 📌 0
Preview
GitHub Copilot coding agent for Jira is now in public preview You can now assign Jira issues to GitHub Copilot coding agent, our asynchronous, autonomous agent, and get AI-generated draft pull requests created in your GitHub repository. When you assign a… The post GitHub Copilot coding agent for Jira is now in public preview appeared first on The GitHub Blog.

GitHub Copilot coding agent for Jira is now in public preview

05.03.2026 23:03 👍 0 🔁 0 💬 0 📌 0
Preview
Insiders (version 1.111) Learn what is new in Visual Studio Code 1.111 (Insiders) Read the full article

Insiders (version 1.111)


05.03.2026 22:53 👍 0 🔁 0 💬 0 📌 0
Preview
Making agents practical for real-world development Explore agent orchestration, extensibility, and continuity in VS Code 1.110: lifecycle hooks, agent skills, session memory, and integrated browser tools. Read the full article

Making agents practical for real-world development


05.03.2026 22:53 👍 0 🔁 0 💬 0 📌 0
Preview
Copilot code review now runs on an agentic architecture Copilot code review now runs on an agentic tool-calling architecture and is generally available for all users with Copilot Pro, Copilot Pro+, Copilot Business, and Copilot Enterprise. For background, see… The post Copilot code review now runs on an agentic architecture appeared first on The GitHub Blog.

Copilot code review now runs on an agentic architecture

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
Hierarchy view improvements and file uploads in issue forms Hierarchy view improvements in GitHub Projects You now have several improvements to hierarchy view in GitHub Projects based on your feedback: Filter sub-issues: You can now filter sub-issues using syntax… The post Hierarchy view improvements and file uploads in issue forms appeared first on The GitHub Blog.

Hierarchy view improvements and file uploads in issue forms

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
Add images to agent sessions Quick start your agent session by starting from an image. Simply paste, drag, or click the image icon to wherever you work with agents on github.com (e.g., from the recently… The post Add images to agent sessions appeared first on The GitHub Blog.

Add images to agent sessions

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
Pick a model for @copilot in pull request comments You can ask Copilot coding agent to make changes in any pull request by mentioning @copilot. This works in pull requests created by Copilot and in pull requests created by… The post Pick a model for @copilot in pull request comments appeared first on The GitHub Blog.

Pick a model for @copilot in pull request comments

05.03.2026 21:33 👍 0 🔁 0 💬 0 📌 0
Preview
60 million Copilot code reviews and counting How Copilot code review helps teams keep up with AI-accelerated code changes. The post 60 million Copilot code reviews and counting appeared first on The GitHub Blog.

60 million Copilot code reviews and counting

05.03.2026 20:57 👍 0 🔁 0 💬 0 📌 0
Preview
Release v1.0 of the official MCP C# SDK Discover what’s new in the v1.0 release of the official MCP C# SDK, including enhanced authorization, richer metadata, and powerful patterns for tool calling and long-running requests. The post Release v1.0 of the official MCP C# SDK appeared first on .NET Blog.

Release v1.0 of the official MCP C# SDK

05.03.2026 18:45 👍 0 🔁 0 💬 0 📌 0
Preview
Scaling AI opportunity across the globe: Learnings from GitHub and Andela Developers connected to Andela share how they’re learning AI tools inside real production workflows. The post Scaling AI opportunity across the globe: Learnings from GitHub and Andela appeared first on The GitHub Blog.

Scaling AI opportunity across the globe: Learnings from GitHub and Andela

05.03.2026 17:57 👍 0 🔁 0 💬 0 📌 0
Preview
Unified AI Weather Forecasting Pipeline through Aurora, Foundry, and Microsoft Planetary Computer Pro

Weather shapes some of the most critical decisions we make, from protecting critical infrastructure and global supply chains to keeping communities safe during extreme events. As climate variability becomes more volatile, an organization’s ability to predict, assess, and plan its response to extreme weather is a defining capability for modern infrastructure owners and operators. This is especially true for the energy and utility sector — even small delays in preparation and response can cascade into massive operational risk and financial impact, including widespread outages and millions in recovery costs. Operators of critical power infrastructure are increasingly turning to AI-powered solutions to reduce their operational and service-delivery risk.

“As the physical risks to our grid systems grow, so too does our technological capacity to anticipate them. Artificial intelligence has quietly reached a maturity point in utility operations: not just as a tool for optimization, but as a strategic foresight engine. The opportunity is clear: with the right data, infrastructure, and operational alignment, AI outage-prediction strategies for the utility grid can now forecast vulnerabilities with precision and help utilities transition from reactive to preventive risk models.” – Article by Think Power Solutions

Giving providers direct control of their data and AI analytics allows them to derive better, more actionable insights for their operations. Today, we’ll demonstrate how organizations can use the state-of-the-art Aurora weather model in Microsoft Foundry, with weather data provided by Microsoft Planetary Computer (MPC), an Azure-based geospatial data management platform, to develop a utility-industry-specific impact prediction capability.
Taking Control of Your Weather Prediction

Microsoft Research first announced Aurora in June 2024: a cutting-edge AI foundation model enabling locally executed, on-demand, global weather forecasting and storm-trajectory prediction generated from publicly available weather data. Two months later, Aurora became available on Microsoft Foundry, elevating on-demand weather forecasting from a self-hosted experience to managed deployments and readying Aurora for broader enterprise and public adoption. Aurora’s scientific foundations and forecasting performance were peer‑reviewed and published in Nature, providing independent validation across global benchmarks. Its evolution continues with a strong commitment to openness and interoperability: in November 2025, Microsoft announced plans to open-source Aurora to accelerate innovation across the global research and developer community.

Building on the innovation and continued development of Aurora, today we are showcasing how organizations can operationalize this state-of-the-art capability with Microsoft Planetary Computer and Microsoft Planetary Computer Pro. By bringing together the vast public geospatial data stores in Planetary Computer with the private data managed by Planetary Computer Pro, organizations can unify their weather prediction and geospatial data in a single platform, simplifying data processing pipelines and data management. This allows enterprise customers to take control of their own weather forecasting on their own timeline.

A Unified Weather Prediction Data Pipeline

A key pain point for energy and utility companies is the inability to reliably ingest, store, and operationalize high-volume weather data. Model inputs and outputs often sit scattered across fragmented pipelines and platforms, making decisions difficult to trace, reproduce, and reference over time.
For example, as industry articles note, many utility companies have to pull public data from various silos, maintain GIS layers in another system, and run operational planning in a separate environment — forcing teams to manually stitch together forecasts, assets, and risk assessments, and introducing delays exactly when rapid decisions matter most. With the MPC Pro + Microsoft Foundry pipeline, utility companies move from fragmented, manual workflows to a single operating platform, where the value lies in a seamless end-to-end data-to-model pipeline. Users can leverage Aurora on Microsoft Foundry alongside Microsoft Planetary Computer Pro’s geospatial data platform to unlock the following unified workflow:

1. Source near-real-time weather data from Planetary Computer
2. Run Aurora in Microsoft Foundry
3. Fuse weather prediction results with geospatial data in Planetary Computer Pro for rapid assessment and post-processing

A Ready-to-Use Reference Architecture

This reference architecture provides a reusable pattern for operationalizing frontier weather models with Microsoft Planetary Computer Pro and Microsoft Foundry. It feeds updated global weather data, hosted by Microsoft Planetary Computer, to the model hosted in Microsoft Foundry, then fuses those prediction results with enterprise geospatial context for analysis, decision-making, and action. Each component plays a distinct role in ensuring forecasts are timely, scalable, and directly usable within operational workflows.

Near-Real-Time Weather Data

Microsoft Planetary Computer automatically ingests, indexes, and distributes up-to-date global weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF) four times per day. This fully managed data pipeline ensures that the latest atmospheric datasets are continuously refreshed, standardized, and readily accessible, eliminating the need for manual data acquisition or preprocessing.
Storing and Centralizing Public and Private Geospatial Data on Microsoft Planetary Computer Pro

Microsoft Planetary Computer Pro enables utility operators to store, manage, and access both public and private geospatial datasets within a single Azure platform. With a Microsoft Planetary Computer Pro GeoCatalog, organizations can centralize ECMWF weather data alongside infrastructure and location data to support downstream analyses.

Microsoft Foundry Hosts and Runs the Weather Prediction Model on Demand

Microsoft Foundry provides model access and the infrastructure required to run Aurora and other weather forecasting models. Users can provision Aurora inference endpoints on their own dedicated compute. Once provisioned, the user can open the Python notebook and run the model to execute weather forecasts on demand.

Weather Forecast Outputs Are Fused with Existing Data Sources on Microsoft Planetary Computer Pro

Aurora’s weather prediction outputs are integrated back into Microsoft Planetary Computer Pro, where they are fused with existing public or private geospatial datasets. This makes forecast results immediately accessible for visualization, post-processing, and analysis — such as identifying assets at risk, estimating localized impact, informing operational response plans, or pre-positioning assets for quick recovery. By combining AI-driven forecasts with geospatial context, organizations can move from raw predictions to actionable insights in a single workflow. The solution also gives organizations a centralized platform to store and catalog geospatial data for future traceability.

Figure 1 – Aurora + Microsoft Foundry + Microsoft Planetary Computer reference architecture

Unified Weather Prediction Demonstration

This demonstration visualizes the forecast storm track (Figure 2), along with projected damage impact along the storm path and associated coastal surge areas (Figures 3 & 4).
This enables users to assess asset exposure, anticipate damage due to winds, pre-position crews, and proactively protect critical infrastructure—helping reduce outage duration, lower operational costs, and improve grid resilience. Figure 2 - Cone of Uncertainty for Aurora Forecast and Actual Hurricane HELENE Track Figure 3 - Swath Impact Layers with Infrastructure Overlay (Transmission Lines) Figure 4 - Swath Impact Layers with Infrastructure Overlay (Substations & Powerplants) Getting Started The python notebook supports tracking of historical storm events, forecasting real-time storm trajectories, and overlaying critical power infrastructure structure data from OpenStreetMap to visualize overlap. To get started, deploy this solution in your Azure environment to begin generating weather forecasts and storm-track predictions. The code and documentation for running this notebook are available in the linked GitHub Repo. Sample output for you to explore are linked within this HTML. For additional resources, visit the following MS Learn pages: Microsoft Planetary Computer Pro Microsoft Foundry The interoperability between ‘GeoAI models + data platform’ extends far beyond weather prediction. It empowers organizations to take control of their geospatial data; to generate actionable insights on their own timeline, and to meet their own specific needs. With Microsoft Planetary Computer and Microsoft Foundry together, organizations will unify their enterprise geospatial data, and unlock its value with powerful, and state of the art AI solutions.

Unified AI Weather Forecasting Pipeline thru Aurora, Foundry, and Microsoft Planetary Computer Pro

05.03.2026 17:19 👍 0 🔁 0 💬 0 📌 0
Preview
Copilot usage metrics now includes user-level GitHub Copilot CLI activity As a follow-up to last week’s release of enterprise-level CLI telemetry, we’re expanding coverage to the user-level. You can now view CLI-specific activity and usage totals in order to: Understand… The post Copilot usage metrics now includes user-level GitHub Copilot CLI activity appeared first on The GitHub Blog.

Copilot usage metrics now includes user-level GitHub Copilot CLI activity

05.03.2026 17:02 👍 0 🔁 0 💬 0 📌 0
Preview
Lock and unlock draft repository security advisories Repository administrators can now lock draft repository security advisories and private vulnerability reports to prevent collaborators from editing advisory content or metadata. When locked, only administrators can make changes; collaborators… The post Lock and unlock draft repository security advisories appeared first on The GitHub Blog.

Lock and unlock draft repository security advisories

05.03.2026 00:28 👍 0 🔁 0 💬 0 📌 0
Preview
Grok Code Fast 1 is now available in Copilot Free auto model selection Grok Code Fast 1 is generally available to Copilot Free plans via Copilot auto model selection. This model is now added to the list of possible models that Copilot might… The post Grok Code Fast 1 is now available in Copilot Free auto model selection appeared first on The GitHub Blog.

Grok Code Fast 1 is now available in Copilot Free auto model selection

04.03.2026 19:57 👍 1 🔁 0 💬 0 📌 0
Preview
Copilot Memory now on by default for Pro and Pro+ users in public preview Copilot Memory is now enabled by default for all GitHub Copilot Pro and Copilot Pro+ users. Previously in public preview as an opt-in feature, Copilot Memory allows Copilot to build… The post Copilot Memory now on by default for Pro and Pro+ users in public preview appeared first on The GitHub Blog.

Copilot Memory now on by default for Pro and Pro+ users in public preview

04.03.2026 13:46 👍 0 🔁 0 💬 0 📌 0
Preview
Unlocking document understanding with Mistral Document AI in Microsoft Foundry Enterprises today face a familiar yet formidable challenge: mountains of documents -contracts, invoices, reports, forms - remain locked in unstructured formats. Traditional OCR (optical character recognition) captures text, but often struggles with context, layout complexity, or multilingual content. The result? Slow workflows, error-prone manual reviews, and missed insights. The post Unlocking document understanding with Mistral Document AI in Microsoft Foundry appeared first on Microsoft Azure Blog.

Unlocking document understanding with Mistral Document AI in Microsoft Foundry

04.03.2026 08:26 👍 0 🔁 0 💬 0 📌 0
Preview
Join or host a GitHub Copilot Dev Days event near you GitHub Copilot Dev Days is a global series of hands-on, in-person, community-led events designed to help developers explore real-world, AI-assisted coding. The post Join or host a GitHub Copilot Dev Days event near you appeared first on The GitHub Blog.

Join or host a GitHub Copilot Dev Days event near you

03.03.2026 23:52 👍 0 🔁 0 💬 0 📌 0
Preview
Instant access incremental snapshots: Restore without waiting Today, we’re excited to introduce instant access support for incremental snapshots of Premium SSD v2 (Pv2) and Ultra Disk, delivering an industry-leading snapshot experience where creation, disk restore, and production-ready performance all happen instantly. The post Instant access incremental snapshots: Restore without waiting appeared first on Microsoft Azure Blog.

Instant access incremental snapshots: Restore without waiting

03.03.2026 23:26 👍 0 🔁 0 💬 0 📌 0
Preview
Dependabot alert assignees are now generally available You can now assign Dependabot alerts to specific users, helping your team track and remediate dependency vulnerabilities more effectively by assigning clear ownership of alerts. How it works From the… The post Dependabot alert assignees are now generally available appeared first on The GitHub Blog.

Dependabot alert assignees are now generally available

03.03.2026 22:44 👍 0 🔁 0 💬 0 📌 0
Preview
Azure Developer CLI (azd): One command to swap Azure App Service slots The new azd appservice swap command makes deployment slot swaps fast and intuitive. The post Azure Developer CLI (azd): One command to swap Azure App Service slots appeared first on Azure SDK Blog.

Azure Developer CLI (azd): One command to swap Azure App Service slots

03.03.2026 22:13 👍 0 🔁 0 💬 0 📌 0
Preview
Email notifications for included usage thresholds GitHub now warns you by email as your included usage approaches its monthly thresholds across Actions, Packages, Git LFS, and Codespaces. GitHub now sends email notifications when your included usage… The post Email notifications for included usage thresholds appeared first on The GitHub Blog.

Email notifications for included usage thresholds

03.03.2026 21:01 👍 0 🔁 0 💬 0 📌 0
Preview
Available today: GPT-5.3 Instant in Microsoft 365 Copilot Today, we’re excited to announce the addition of OpenAI’s GPT‑5.3 Instant to Microsoft 365 Copilot and Microsoft Copilot Studio. Building on GPT‑5.2 Instant, GPT‑5.3 Instant improves the quality of everyday conversations with more reliably accurate responses, stronger and more expressive writing, and more direct, useful answers—helping Copilot provide a useful response when appropriate, rather than defaulting to disclaimers or declining to proceed. For questions that draw on information from the web, with GPT-5.3 Instant, Copilot now delivers clearer, more relevant responses by better synthesizing what it finds online with its own knowledge and reasoning—so answers are less shaped by retrieved content alone and more closely reflect the needs of the task. Together, these improvements help Copilot provide responses that are more focused, relevant, and immediately usable in your work. We are committed to bringing you the latest cutting-edge AI innovation and model choice tuned for work and tailored to your business needs—with the security, compliance, and privacy that you expect from Microsoft. Microsoft 365 Copilot licensed users will receive priority access to GPT-5.3 Instant, while users without a Microsoft 365 Copilot license will have standard access. Learn more about standard versus priority access here Get started today GPT-5.3 Instant begins rolling out today to Microsoft 365 Copilot users with priority access and Microsoft 365 Copilot Chat users with standard access. For agent makers, GPT-5.3 Instant is available in Copilot Studio early release cycle environments. Updates to Thinking will follow soon. Microsoft 365 Copilot chat interface with the response mode menu open, showing Auto, Quick response, Think deeper, and GPT‑5.3 Quick response selected. In Copilot Chat, GPT‑5.3 Instant appears as GPT‑5.3 Quick response in the model selector under More, and in Copilot Studio as GPT‑5.3 Chat. 
Try it Select GPT-5.3 Quick response in Copilot Chat and try this prompt: In work - Draft a [topic] update for last week using meetings and emails with key decisions. Format to be a skimmable Teams post. Include wins, work in motion, next week’s focus, and any asks I have for the team. Keep it under 12 bullets total Our team will continue to refine the experience based on your feedback as we build the future of work together. Learn more about Microsoft 365 Copilot and Microsoft Copilot Studio and start transforming work with Copilot today. Learn more about GPT-5.3 Instant from OpenAI here. For the latest research insights on the future of work and generative AI, visit WorkLab.

Available today: GPT-5.3 Instant in Microsoft 365 Copilot

03.03.2026 19:56 👍 0 🔁 0 💬 0 📌 0