A Primer for Underwriting Leaders

Thinking beyond the model.

Generative AI reads like a brilliant intern with no memory of your rulebook. Neurosymbolic AI pairs that intern with a structured mind — an ontology and a rule engine — so every decision is auditable, consistent with your guidelines, and defensible to a regulator. This is a working explainer, with a live commercial property submission at the end.

Domain
Commercial Property
Format
Interactive Explainer
Audience
Business & Technical
Reading Time
~12 minutes
01 / DEFINE

What neurosymbolic AI actually is.

Neurosymbolic AI is a hybrid architecture that combines the pattern recognition of neural networks with the logical rigor of symbolic reasoning.

Large language models are neural networks — statistical engines that learn from vast amounts of text. They are brilliant at reading unstructured documents, understanding intent, and producing fluent language. But they reason probabilistically, not logically. Two runs of the same question can produce two different answers, and the model cannot show its work in a way a regulator would accept.

Symbolic AI is the older tradition — knowledge graphs, ontologies, rule engines, and formal logic. It is rigid, explicit, and fully auditable, but it cannot read a broker email or interpret a loss-run PDF on its own.

Neurosymbolic systems use the neural side to perceive — extracting structured facts from messy inputs — and the symbolic side to decide — applying your underwriting guidelines, regulatory requirements, and portfolio rules with deterministic precision. The neural model proposes. The symbolic engine disposes.

01 — NEURAL

Perceive

Extract entities, relationships, and intent from unstructured submissions — emails, ACORD forms, loss runs, inspection reports, news, geospatial imagery.

02 — ONTOLOGY

Represent

Map every extracted fact into a formal domain model: what a Location is, what a COPE characteristic is, how Occupancy relates to Protection and Exposure.

03 — SYMBOLIC

Reason

Apply underwriting guidelines, capacity limits, regulatory constraints, and reinsurance treaty rules as executable logic — with every inference traceable to a rule and a fact.

04 — NEURAL (again)

Explain

Generate human-readable narratives, referral memos, and broker responses grounded in the reasoner's output — never hallucinated, always traceable.
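The four stages above can be sketched as a minimal pipeline. Everything here is illustrative — the function names, the stubbed extraction, the toy concept mapping, and the $25M authority limit are assumptions for the sketch, not a real system:

```python
# Minimal sketch of the perceive → represent → reason → explain loop.
# All names, fields, and the authority limit are illustrative assumptions.

def perceive(submission_text: str) -> dict:
    """Neural stage: extract structured facts from unstructured text.
    In practice this is an LLM constrained to emit a fixed schema."""
    return {"construction": "JoistedMasonry", "tiv_usd": 38_000_000}  # stubbed

def represent(facts: dict) -> dict:
    """Ontology stage: map raw strings onto formal concepts."""
    concept_map = {"JoistedMasonry": "MasonryConstruction"}  # toy mapping
    facts["construction_concept"] = concept_map[facts["construction"]]
    return facts

def reason(kb: dict) -> list:
    """Symbolic stage: apply rules deterministically, recording rule IDs."""
    findings = []
    if kb["tiv_usd"] > 25_000_000:  # hypothetical authority limit
        findings.append(("CP-TIV-03", "require_referral"))
    return findings

def explain(findings: list) -> str:
    """Neural stage: narrate the reasoner's output, citing rule IDs."""
    return "; ".join(f"{action} [{rule}]" for rule, action in findings)

print(explain(reason(represent(perceive("Broker email text...")))))
# → require_referral [CP-TIV-03]
```

The shape is the point: only `perceive` and `explain` touch a language model; the decision itself lives in `reason`, where it is deterministic.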

02 / REASON

How the two systems actually think.

The difference isn't about accuracy on any single task — it's about the kind of cognition each system performs. An LLM generates the most plausible next token. A symbolic reasoner evaluates the truth value of propositions against a formal knowledge base. Understanding this gap is the key to knowing where to apply each.

Large Language Model

Statistical next-token prediction

"Based on what I've read, this pattern usually leads to that outcome."

  • Mechanism: Transformer architecture predicts the next token given the preceding context. Reasoning is an emergent side-effect of scale, not a guarantee.
  • Knowledge: Implicit. Compressed into billions of weights. Impossible to inspect, version, or selectively update without retraining.
  • Consistency: Probabilistic. The same input can yield different outputs, especially at non-zero temperature.
  • Traceability: None. The model cannot cite which training example or which weight produced its conclusion.
  • Strength: Unstructured text, nuance, language fluency, summarization, and intent recognition.
  • Weakness: Arithmetic, multi-step logic, rare edge cases, regulatory reasoning, hallucinations under ambiguity.
Neurosymbolic System

Logic over a knowledge graph

"Given these facts and these rules, this conclusion follows — here is the proof."

  • Mechanism: Neural extraction feeds an ontology; a reasoner applies first-order logic or rule systems (Datalog, OWL, RETE) to derive conclusions.
  • Knowledge: Explicit. Concepts, relationships, and rules live in a knowledge base you can read, version, and edit like source code.
  • Consistency: Deterministic. Same facts plus same rules always produce the same conclusion. Reproducible across runs.
  • Traceability: Complete. Every inference chains back to specific facts and named rules — a proof tree ready for audit.
  • Strength: Policy compliance, capacity rules, multi-constraint optimization, portfolio accumulation, regulatory defense.
  • Weakness: Brittle on unstructured input without a neural front-end. Requires investment in ontology and rule authoring.

Going deeper — what's happening inside each system

Inside an LLM

A submission comes in as text. The model tokenizes it and passes the tokens through dozens of transformer layers, each applying attention across the sequence. At each position, the model produces a probability distribution over its vocabulary and samples the next token.

There is no separate "reasoning step." What looks like reasoning — chain-of-thought, for example — is the model generating text that resembles reasoning it has seen in training data. It often gets the answer right because that pattern is well-represented in the corpus. It sometimes gets it confidently wrong for the same reason.

The model has no persistent representation of your underwriting manual. Even if you paste the manual into context, the model's attention to it is probabilistic; there is no guarantee rule CP-TIV-03 will be applied when it should be.

Inside a Neurosymbolic System

The same submission arrives as text. A neural extractor (often an LLM itself, constrained) parses it into structured facts: Location(id=L1, address=..., construction=JoistedMasonry, occupancy=LightMfg, tiv=USD 38M).

These facts are asserted into a knowledge graph built on an ontology. A reasoner — a rules engine or a description-logic inference engine — then evaluates every applicable rule. Rule CP-TIV-03 is represented as executable logic: IF tiv > authority_limit(underwriter) THEN require_referral.

The conclusion isn't generated — it's derived. Every derivation produces a proof: the fact, the rule, and the binding. That proof is what makes the output auditable, consistent, and defensible.
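A toy version of that derivation, with the proof attached to the conclusion. The rule ID comes from the text; the shape of the proof record is my own illustration:

```python
# "Derived, not generated": each conclusion carries its proof — the fact,
# the rule, and the binding. The proof-record structure is illustrative.

facts = {"tiv": 84_500_000, "authority_limit": 50_000_000}

rules = {
    # IF tiv > authority_limit THEN require_referral
    "CP-TIV-03": lambda f: "require_referral" if f["tiv"] > f["authority_limit"] else None,
}

def derive(facts, rules):
    """Evaluate every rule; record a proof for each firing."""
    proofs = []
    for rule_id, rule in rules.items():
        conclusion = rule(facts)
        if conclusion is not None:
            proofs.append({"conclusion": conclusion,
                           "rule": rule_id,
                           "bindings": dict(facts)})
    return proofs

for p in derive(facts, rules):
    print(p["conclusion"], "by", p["rule"], "given", p["bindings"])
```

Run it twice, or two hundred times: the same facts and the same rules produce the same proofs, which is exactly what reproducibility means here.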

SYSTEM 1 · KAHNEMAN

Fast. Associative. Often right, sometimes dangerously wrong.

System 1 is pattern recognition at speed. It's the brain's autopilot — fluent, confident, and excellent for familiar problems. It's also where cognitive bias lives. When the pattern matches something common, System 1 is fast and accurate. When the situation is ambiguous, novel, or hinges on precise rules, it produces a confident answer that can be materially wrong.

A large language model is System 1 made mechanical — a pattern engine operating at machine scale, with machine confidence.
SYSTEM 2 · KAHNEMAN

Slow. Deliberate. Rule-following, multi-step, defensible.

System 2 is effortful thinking — the kind you use to apply regulations, work through a multi-step calculation, or weigh alternatives under constraints. It's slower and more expensive, but it's also the only kind of thinking that produces an auditable trail of why a decision was made. Underwriting is a System 2 discipline.

A neurosymbolic system operationalizes System 2 — and can apply several distinct modes of formal reasoning, the right one for the question.
REASONING MODES

Different questions demand different kinds of thinking.

Because a neurosymbolic system has an explicit model of the world, it can apply the reasoning mode that fits the question — not just probabilistic association. Each of these shows up somewhere in the underwriting pipeline. The rule engine does deductive work. Classifying a risk is abductive. Pricing involves causal reasoning. Structuring a placement is constraint-satisfaction. Stress-testing is counterfactual.

MODE 01

Deductive

Given facts and rules, derive what must be true. Certainty, not probability. This is what rule engines and description logics do — and what regulators and auditors expect.
In Underwriting: If TIV > $50M and underwriter authority is $50M, referral is required. No judgment call — a logical consequence.
MODE 02

Abductive

Given observations, infer the best explanation. Used to classify — map a NAICS code or an address to the right concept in the ontology, resolve which peril regime applies.
In Underwriting: NAICS 332710 + welding equipment observed → best classification is HighHazard_Welding, not generic light manufacturing.
MODE 03

Causal

Reason about cause and effect, not just correlation. Knowing what drives what lets the system trace pricing loads to their origin — and predict how a change propagates.
In Underwriting: Tier 1 location → elevated wind exposure → NWS deductible needed → premium rate load of +12%. Each link traceable.
MODE 04

Constraint-satisfaction

Find a solution that satisfies many constraints simultaneously. Price, capacity, regulatory filings, appetite, treaty terms — all must hold. Solvers find the option that clears every test.
In Underwriting: What deductible + sublimit + cession structure keeps retained accumulation < 80% while maximizing margin?
MODE 05

Counterfactual

Ask what-if. With a structured world model the system can simulate changes — what happens to the portfolio if this bind goes through, if the treaty renews at different terms, if the book shifts.
In Underwriting: If we cede 40% via quota-share instead of 25%, retained PML drops by $3.8M — reshaping the bind decision.
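Mode 04, constraint-satisfaction, fits in a few lines of code: enumerate candidate structures, keep only those that clear every constraint, then pick the best by an objective. All figures below — the cap ratio, the portfolio numbers, the margin formula — are hypothetical stand-ins, not the demo's actual values:

```python
# Mode 04 as brute-force search: enumerate candidate structures, keep those that
# satisfy every constraint, pick the best by a toy margin objective.
# All figures (cap, portfolio numbers, margin formula) are hypothetical.

candidates = [
    {"cession": c / 100, "deductible": d}
    for c in range(25, 80, 5)             # cession band, e.g. a treaty-permitted range
    for d in (25_000, 50_000, 100_000)
]

CAP_RATIO = 0.80                           # retained-accumulation cap
BASE, GROSS_ADD, TREATY_CAP = 568.8e6, 20.0e6, 720e6  # hypothetical portfolio figures

def feasible(c):
    retained_share = 1 - c["cession"]
    accumulation = (BASE + retained_share * GROSS_ADD) / TREATY_CAP
    return accumulation <= CAP_RATIO

def margin(c):
    # Toy objective: retaining more premium raises margin; a higher deductible earns credit.
    return (1 - c["cession"]) * 100 + c["deductible"] / 10_000

best = max(filter(feasible, candidates), key=margin)
print(best)  # → {'cession': 0.65, 'deductible': 100000}
```

Production systems replace the brute-force loop with a real solver, but the contract is the same: every returned structure provably satisfies every constraint.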
03 / ONTOLOGY

An ontology is a shared mental model — for machines.

Think of an ontology as the vocabulary and grammar of your business, written in a form a computer can reason over.

When an underwriter says "a joisted-masonry building in a protected Class 4 town with a sprinkler system in a light manufacturing occupancy," every word is loaded with meaning — and every word connects to other concepts. Construction class implies vulnerability to wind and fire. Protection class modifies that vulnerability. Occupancy shapes exposure. The underwriter's brain holds this web of concepts.

An ontology makes that web explicit. It names the classes (Location, Building, Occupancy, Peril), the properties (TIV, year built, construction type), and the relationships (a Building hasProtection, is exposedTo Perils, isLocatedIn a CatZone).

Without an ontology, the LLM knows the words. With an ontology, the system knows the meaning.

Once the world is modeled this way, rules become composable and inheritable. A rule that applies to "all combustible-construction buildings" automatically applies to anything the ontology classifies as combustible — frame, heavy timber, ordinary. You don't write the rule once per subtype. You write it once, against the concept.
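That inheritance is easy to picture in code. Here is a toy subsumption hierarchy — the class names and the 10% surcharge are illustrative, not real guidelines — where one rule, attached to the parent concept, fires for every subtype:

```python
# One rule, written against the parent concept, applies to every subtype.
# The hierarchy and the 10% surcharge are illustrative, not real guidelines.

hierarchy = {
    "Frame": "CombustibleConstruction",
    "HeavyTimber": "CombustibleConstruction",
    "OrdinaryMasonry": "CombustibleConstruction",
    "FireResistive": "NonCombustibleConstruction",
}

def is_a(concept, ancestor):
    """Walk the subsumption chain upward."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = hierarchy.get(concept)
    return False

def combustible_surcharge(construction):
    # The rule lives once, on the concept — not once per subtype.
    return 0.10 if is_a(construction, "CombustibleConstruction") else 0.0

print(combustible_surcharge("HeavyTimber"))   # → 0.1
print(combustible_surcharge("FireResistive")) # → 0.0
```

Add a new combustible subtype to the hierarchy and the surcharge rule covers it with no re-authoring — which is the maintainability argument in miniature.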

Most importantly, ontologies compose across domains. A location ontology plugs into a peril ontology plugs into a reinsurance ontology. The same structure that supports underwriting supports claims, portfolio management, and regulatory reporting.

Fig. 01 — Minimal Commercial Property Ontology
[Diagram summarized] Core classes: Policy, Location, Building, CatZone, Peril (Wind · Fire · Flood). Characteristics: Construction (Frame · Joisted · Fire-Rsv), Occupancy (NAICS-mapped), Protection (Sprinkler · PC · FD). Attribute: TIV (USD). Relationships: covers, contains, hasClass, hasUse, hasProtection, exposedTo, inZone, hasValue, modifies.
I.

Meaning, not matching

An LLM might see "frame construction" and "wood frame" as similar strings. An ontology knows they are the same concept, and knows that concept is classified as combustible under ISO CC-1.

II.

Rules follow concepts

A rule attached to Combustible Construction automatically applies to every subtype — without re-authoring. Your underwriting manual becomes a small, maintainable tree instead of a sprawling document.

III.

Cross-domain consistency

The same Location concept anchors underwriting, claims, portfolio, and reinsurance. One broker address resolves to one entity everywhere — the foundation of clean accumulation and PML analysis.

04 / DEMO

A live commercial property underwriting decision.

A broker submits a mid-market commercial property risk. Below is the actual submission. Then: how the process traditionally runs, how a pure LLM would handle it, and how a neurosymbolic system handles it — not just gating the referral but reasoning across the enterprise ontology to derive a priced, structured quote. The NSAI pipeline runs in five stages: extract facts from the submission, enrich with ontology-driven joins, evaluate underwriting rules, compose an explanation, and finally reason across pricing, reinsurance, portfolio, claims, and regulatory ontologies to produce the actual terms and premium. Every number traceable to its source.

Submission · Meridian Fabrication, Inc.

SUB-2026-047193 · Broker: Hanley & Co.
Named Insured
Meridian Fabrication, Inc.
Coverage
Building + BPP + BI
Effective Date
06 / 01 / 2026
Locations
3 (TX, OK, LA)
Primary Location
Galveston, TX 77551
Construction
Joisted Masonry (1987)
Occupancy
Metal Fabrication (NAICS 332710)
Protection
Sprinklered · PC 4 · Central Station
Total TIV
$ 84,500,000
5-Yr Loss History
2 losses · $ 412k incurred
Prior Carrier
Non-renewed (capacity)
Broker Note
Quick turn requested — bind 05/25

Before introducing any AI, here's the traditional workflow a commercial property underwriter runs on a submission of this size. The process is sound — but it's slow, uneven across underwriters, and heavily dependent on manual interpretation.

STEP 01
Intake & Clearance

Check for conflicts, prior submissions, sanctioned entities. Verify appetite fit.

STEP 02
COPE Assessment

Review Construction, Occupancy, Protection, Exposure for each location.

STEP 03
Loss & Exposure Analysis

Evaluate loss runs, CAT exposure, accumulation vs portfolio limits.

STEP 04
Pricing & Authority

Run rating model, compare to authority grid, determine referral need.

STEP 05
Decision & Terms

Accept, decline, or quote with subjectivities. Issue terms to broker.

Observed Pain Points

Slow, inconsistent, and hard to defend

A submission this size typically takes 5–8 business days across multiple handoffs. Two underwriters looking at the same risk routinely produce different terms. When a decision is challenged — by a broker, a regulator, or internal audit — the rationale often lives in one underwriter's email thread or handwritten notes. Accumulation exposure is checked manually against stale portfolio extracts.

  • Inconsistent application of guidelines across the desk
  • Manual portfolio checks miss aggregation risk in real time
  • Decisions are defensible only where documentation is strong — and documentation is uneven

A pure LLM approach: paste the submission into a prompt along with the underwriting manual, and ask for a recommendation. Fast, fluent, and dangerous to rely on. Here is what the model produced.

LLM · SINGLE-PASS PROMPT temp=0.2 · ctx=128k · 11.4s
# Prompt: "You are a commercial property UW. Review submission and recommend."
Analysis: Risk appears acceptable.
Construction is joisted masonry — generally favorable for fire.
Occupancy is light manufacturing, which is within typical appetite.
Loss history shows two claims totaling ~$412k; frequency is low.
Recommendation: QUOTE with a 5% rate uplift for loss history.
Suggested Deductible: $25,000 AOP.
# Narrative delivered in fluent prose with no sourcing
What the LLM missed

Confident. Fluent. Substantively wrong.

The output reads well, but it is materially incorrect on multiple counts that an experienced underwriter would catch — and a regulator would later find. These aren't subtle gaps; they are the kind of errors that become errors-and-omissions claims.

  • Named peril blind spot: Galveston is a Tier 1 named-storm county. The model made no mention of wind or surge exposure. There is no named-windstorm deductible in the recommendation.
  • NAICS mis-classification: 332710 is metal fabrication, not "light manufacturing." Welding operations materially elevate fire hazard. The fire-protective classification is wrong.
  • Authority breach: $84.5M TIV exceeds most individual underwriters' single-risk authority. No referral was flagged.
  • Portfolio blind: No check against existing Gulf Coast Tier 1 accumulation, PML, or reinsurance treaty attachment.
  • Prior-carrier red flag: Non-renewal for "capacity" is often a euphemism. The model accepted the broker's framing without probing.
  • Non-reproducible: Running the same prompt tomorrow with the same submission produces a different recommendation. No audit trail.

The neurosymbolic system runs the submission through five stages: neural extraction into the ontology, symbolic enrichment from external sources, rule-engine evaluation of underwriting guidelines, neural explanation of the derived decision, and cross-domain reasoning to price and place the quote. Every step is inspectable.

Stage 01 / Neural extraction — submission text parsed into ontology instances
Policy(id=P1, insured="Meridian Fabrication", effective=2026-06-01)
Location(id=L1, address="Galveston, TX 77551", primary=true)
Building(id=B1, locatedAt=L1, yearBuilt=1987, tiv=USD 38.2M)
Construction(B1, JoistedMasonry) ⊑ ISO_CC_3
Occupancy(B1, NAICS=332710) → resolves to MetalFabrication
Protection(B1, sprinklered=true, PC=4, alarmType=CentralStation)
Loss(P1, count=2, incurred=USD 412k, period=5y)
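A note on what "a neural extractor, constrained" means in practice: the model fills a typed schema, and every record is validated before it is asserted into the graph. A minimal sketch using stdlib dataclasses — the schema fields and checks are illustrative, with values mirroring the demo:

```python
# Sketch of a "constrained" neural extractor: the model fills a typed schema,
# and every record is validated before assertion into the knowledge graph.
# Schema fields and checks are illustrative; values mirror the demo above.

from dataclasses import dataclass

@dataclass
class BuildingFact:
    id: str
    located_at: str
    year_built: int
    tiv_usd: float
    construction: str
    naics: str

def validate(b: BuildingFact) -> BuildingFact:
    # Reject impossible values before they reach the reasoner.
    assert 1800 < b.year_built <= 2100, "implausible year built"
    assert b.tiv_usd > 0, "TIV must be positive"
    assert len(b.naics) == 6 and b.naics.isdigit(), "NAICS is a 6-digit code"
    return b

b1 = validate(BuildingFact("B1", "L1", 1987, 38.2e6, "JoistedMasonry", "332710"))
print(b1.construction)  # → JoistedMasonry
```

The validation gate is what keeps LLM extraction errors from silently becoming "facts" that downstream rules would faithfully, and wrongly, reason over.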
Stage 02 / Symbolic enrichment — ontology auto-derives and joins to external knowledge
Location(L1).catZone ← lookup(FIPS=48167) → Tier1_NamedWindstorm · FEMA_AE_Flood
MetalFabrication ⊑ HighHazard_Welding → FireHazardClass_4
JoistedMasonry → NonCombustibleExterior · CombustibleInterior
Portfolio(Gulf_T1_retained) = USD 568.8M / 720M cap → 79.0%
PriorCarrier.nonRenewalReason="capacity" → flag for disclosure review
Stage 03 / Rule engine evaluation — every applicable guideline fires deterministically
Rule | Condition | Inputs | Outcome
CP-APT-01 | Named insured must pass clearance & appetite screen | Metal Fab · NAICS 332710 | In Appetite
CP-TIV-03 | If TIV > USD 50M, route to senior authority | TIV = USD 84.5M | Refer
CP-CAT-12 | If location in Tier 1 named-wind zone, require 5% NWS deductible | L1 ∈ Tier1_NamedWindstorm | Subject To
CP-CAT-18 | If FEMA flood zone ∈ {A*, V*}, require flood sublimit or exclusion | L1 ∈ FEMA_AE | Subject To
CP-HAZ-07 | HighHazard_Welding occupancies require hot-work permit warranty | B1 ⊑ HighHazard_Welding | Warranty
CP-ACC-02 | Gulf Tier 1 accumulation must remain < 80% of treaty capacity | Post-bind: 79.0% → 80.4% | Breach
CP-LOSS-04 | Loss ratio < 40% over 5 yr — no load required | LR ≈ 11% (est.) | Pass
CP-DISC-09 | Prior non-renewal requires written disclosure from broker | Non-renewal flagged | Subject To
Stage 04 / Neural explanation — initial decision narrative grounded in the proof tree
# Narrative generated from rule firings — every claim cites a rule ID
Initial screen: REFER · 6 SUBJECTIVITIES · 1 BREACH
Risk is in appetite [CP-APT-01] with acceptable loss history [CP-LOSS-04].
Binding at default 50% cession pushes Gulf T1 accumulation to 80.4%, breaching
the 80% treaty cap [CP-ACC-02] — cession structure must be re-reasoned.
Quote to include: 5% NWS deductible [CP-CAT-12], flood
sublimit USD 5M [CP-CAT-18], hot-work warranty [CP-HAZ-07],
prior non-renewal disclosure [CP-DISC-09], senior UW authority [CP-TIV-03].
Stage 05 / Cross-domain reasoning — compose a priced, placed quote
# Query enterprise ontology for resolution levers & pricing inputs
ReinsuranceTreaty(ABC-Gulf-2026).variableBand = [25%, 75%] retention
PricingOntology.bookRate(MetalFab · PC4 · Tier1) = $0.248 / $100 TIV
ClaimsOntology.severityCurve(welding, 5y) → blended expected loss
RegFilingsOntology.texas = NWS ded minimum OK · filing approved
# Constraint-satisfaction · find retention that clears CP-ACC-02
solve(retention ∈ [0.25, 0.75] | accumulation_post_bind ≤ 0.80 × cap)
→ retention = 0.25 (75% ceded) · post-bind = 79.7% ✓
# Causal pricing derivation · base × loads × credits
premium = base × (1 + welding + NWS + credit) × retained + commission
Final recommendation: BIND · 25% RETENTION · NET PREMIUM $104,109
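The accumulation arithmetic above can be checked by hand. BASE and the treaty cap are stated in the demo (USD 568.8M retained against a 720M cap); GROSS_ADD is not stated directly — roughly USD 20.16M is the implied wind-exposed addition that reconciles the 79.0% → 80.4% → 79.7% figures, so treat it as an inferred assumption:

```python
# Checking the Stage 05 accumulation arithmetic. BASE and TREATY_CAP are stated
# in the demo; GROSS_ADD (~USD 20.16M) is the implied wind-exposed addition that
# reconciles 79.0% → 80.4% → 79.7%, so treat it as an inferred assumption.

BASE, TREATY_CAP = 568.8e6, 720e6
GROSS_ADD = 20.16e6

def post_bind(retention):
    """Retained accumulation as a share of treaty capacity after binding."""
    return (BASE + retention * GROSS_ADD) / TREATY_CAP

print(f"{post_bind(0.50):.1%}")  # → 80.4% — default 50% retention breaches the cap
print(f"{post_bind(0.25):.1%}")  # → 79.7% — flexed to 25% retention clears it
```

This is the "every number traces to a source" claim in miniature: the breach and its resolution are arithmetic over named inputs, not model output.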
What the neurosymbolic system produced

A decision, a proof, a structure, and a price.

Unlike the LLM, the NSAI system didn't stop at a fluent answer. It caught every compliance flag, diagnosed the accumulation breach with real math, then reasoned across six enterprise ontologies — treaty, portfolio, pricing, claims, regulatory filings, and commission — to find a cession structure that resolves the breach and arrive at a defensible net premium. Every value here has a named source.

  • Caught the CAT exposure through ontology-driven enrichment — not left to the model's attention.
  • Caught the occupancy hazard via NAICS-to-hazard-class inheritance in the ontology.
  • Diagnosed the accumulation breach at default cession with explicit portfolio math — 79.0% + gross add at 50% retention → 80.4%.
  • Resolved it by constraint-satisfaction — flexed retention to 25% within the treaty band, lands at 79.7%, clears the 80% cap.
  • Priced the risk causally — base rate loaded for welding and NWS, credited for protection, netted for cession.
  • Every number traces to a source — the proof tree is the compliance record.
LIVE · WATCH BOTH MODELS RUN

See it for yourself.

Press Run. Both systems receive the Meridian Fabrication submission at the same moment. Watch the LLM sprint to a fluent answer while the neurosymbolic reasoner extracts, enriches, evaluates rules, composes an explanation, and then reasons across the enterprise ontology — deductively, causally, and by constraint-satisfaction — to structure and price the actual quote. Every number traced to a source.


Same submission. Very different thinking.

Side-by-side: what each system produced on this run.
  • Decision: Quote (wrong) vs. Priced & placed
  • Critical factors caught: 3 of 8 vs. 8 of 8
  • Reasoning across domains: None vs. 6 ontologies
  • Audit trail: Prose vs. Proof tree

LLM-Only Approach

  • Decision accuracy on this risk: Wrong
  • Reproducibility across runs: None
  • Auditable rationale: Prose only
  • Guideline update cost: Re-prompt
  • Portfolio awareness: None
  • Regulatory defensibility: Weak

Neurosymbolic Approach

  • Decision accuracy on this risk: Correct
  • Reproducibility across runs: Deterministic
  • Auditable rationale: Proof tree
  • Guideline update cost: Edit rule
  • Portfolio awareness: Real-time
  • Regulatory defensibility: Strong