
From signal physics to agentic AI systems — end to end.
Two complementary pillars — classical signal processing and modern AI — applied according to the problem's requirements.
Grounded in a physics background (PhD, Université Laval), this pillar covers mathematical modelling, spectral analysis, and signal extraction in noisy environments. The right approach is chosen for the problem, not the other way around.
Techniques
From classical deep learning (CNN, RNN) to transformer architectures, LLMs, fine-tuning, RAG, and agentic workflows — Paul covers the full 2026 AI landscape. The real power of AI lies in data quality and a well-defined problem.
Techniques
LLMs have opened an entirely new class of algorithms — ones built with words and documents, not just numbers.
Traditional expert systems are a semi-infinite series of if/else rules, painstakingly coded one decision at a time. They work — until reality drifts beyond the scenarios the developer anticipated. The result: brittle logic that overfits the data available at coding time and breaks on anything new.
if (condition_1) → rule_A
else if (condition_2) → rule_B
else if (condition_3) → rule_C
... × 1000
LLMs, RAG, and embeddings change the game. Instead of encoding every decision as a rule, we can build algorithms that reason over documents, manuals, and domain knowledge directly. The gap between raw numbers and human understanding — the gap that expert systems tried to bridge with thousands of rules — is now closed by models that understand language natively.
The real breakthrough: combining signal processing and numerical analysis with LLM-powered reasoning. Sensor data flows through classical algorithms for extraction and filtering, then passes to an LLM layer that interprets results using domain documents and contextual knowledge. One pipeline — numbers in, decisions out — without the fragile if/else chain in between.
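In code, the pipeline can be as thin as the sketch below: a classical stage reduces raw samples to physical features, and an LLM stage interprets those features against retrieved documentation. This is an illustrative outline, not a client implementation; query_llm and the vibration-tolerance question are hypothetical placeholders for whatever model client and corpus a project actually uses.

import numpy as np
from scipy import signal

def extract_features(samples: np.ndarray, fs: float) -> dict:
    # Classical stage: band-limit the signal, then summarize it as numbers.
    sos = signal.butter(4, [5.0, 50.0], btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, samples)
    freqs, psd = signal.welch(filtered, fs=fs)
    return {"rms": float(np.sqrt(np.mean(filtered**2))),
            "dominant_hz": float(freqs[np.argmax(psd)])}

def interpret(features: dict, context_docs: list[str]) -> str:
    # LLM stage: turn extracted numbers into a decision grounded in documents.
    prompt = ("Sensor summary: " + str(features)
              + "\nRelevant manual excerpts:\n" + "\n".join(context_docs)
              + "\nIs this vibration signature within tolerance? Justify briefly.")
    return query_llm(prompt)  # hypothetical client for any local or hosted model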
Local models, private RAG, and embedding pipelines, built for clients whose data sovereignty, compliance, or cost-control requirements rule out public APIs.
Public LLM APIs are powerful, but they're not for every problem. When your data can't leave your walls for regulatory, IP, or strategic reasons, we deploy the entire stack on your hardware. Same capabilities, zero data leakage, no per-token billing.
Ollama, llama.cpp, vLLM on your servers or edge devices. Quantized open-weight models (Llama, Mistral, Qwen) sized to your GPU budget. Fine-tuned on your domain when accuracy matters.
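As one concrete illustration of the local-inference setup, the snippet below queries a model served by Ollama on its default local port. The model name and prompt are examples only, and this assumes a model has already been pulled onto the machine (e.g. ollama pull llama3).

import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    # Non-streaming request to Ollama's default local endpoint.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("Summarize the maintenance procedure for pump P-101."))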
Your manuals, specs, tickets, and historical data indexed into a private vector store. The LLM answers questions grounded in your corpus — not the internet. No leakage, no hallucinated policies.
End-to-end ingestion: chunking, embedding, indexing (pgvector, Qdrant, Weaviate). Semantic search, clustering, deduplication — the numerical side of knowledge retrieval, engineered for scale and refresh.
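A minimal sketch of such a pipeline against pgvector, assuming a 384-dimension embedding model behind a hypothetical embed() helper; the table name and connection string are invented for illustration, and a production version adds batching, metadata, and refresh logic.

import psycopg2

conn = psycopg2.connect("dbname=kb user=app")  # hypothetical connection string
conn.autocommit = True
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS chunks (
        id serial PRIMARY KEY,
        text text NOT NULL,
        embedding vector(384)  -- dimension must match the embedding model
    )
""")

def to_pgvector(vec: list[float]) -> str:
    # pgvector accepts a '[x1,x2,...]' string literal cast to vector.
    return "[" + ",".join(f"{x:.6f}" for x in vec) + "]"

def index_chunk(text: str) -> None:
    cur.execute(
        "INSERT INTO chunks (text, embedding) VALUES (%s, %s::vector)",
        (text, to_pgvector(embed(text))),  # embed(): your embedding model
    )

def search(query: str, k: int = 5) -> list[str]:
    cur.execute(
        "SELECT text FROM chunks ORDER BY embedding <=> %s::vector LIMIT %s",
        (to_pgvector(embed(query)), k),    # <=> is pgvector's cosine distance
    )
    return [row[0] for row in cur.fetchall()]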
Fully disconnected deployments for defense, healthcare, or regulated manufacturing. Audit logs, role-based access, reproducible inference. The AI capability without the compliance headache.
Why on-premise
Data sovereignty — nothing leaves your infrastructure
Predictable cost — fixed hardware vs. per-token billing
Compliance-ready — HIPAA, GDPR, ITAR, defense contexts
A rigorous four-step process — from problem to production — with no shortcuts.
01
Understand the problem before reaching for any tool. Decompose it into requirements, constraints, and success criteria.
02
Choose or build the mathematical model or AI architecture suited to the problem.
03
Validate hypotheses with a POC — delivered fast, iterated early, before major investment.
04
Optimization, integration, robustness testing. From algorithm to deliverable system.
Data traceability, label quality, augmentation, and synthetic generation when data is scarce — Paul integrates data management into every AI engagement, not as an afterthought.
Reply within 24 hours.
Paul develops client algorithms on an infrastructure of specialized AI agents: not a gimmick, but a measurable engineering advantage. The result is prototyping speed and test coverage that are impossible to achieve with traditional development.
70+
specialized agents
20+
orchestration workflows
Days
not weeks
Agents generate candidate implementations from mathematical specs. A working POC in days, not weeks.
Specialized agents generate test suites covering edge cases, numerical stability, and degenerate inputs — before delivery.
A review agent evaluates every implementation against algorithmic best practices: vectorization, complexity, appropriate numerical libraries.
Scaffolding (type annotations, docstrings, logging, configuration) is handled by agents. Paul focuses on the math and physics that cannot be delegated.
Multiple agents operate concurrently — tests, docs, benchmarking, review — while Paul works on the numerical core. Impossible for a single developer working linearly.
The agentic infrastructure delivers full-team code quality at an independent consultant cost. Correctly, quickly, without sacrificing rigour.
Mathematical reasoning, physical modelling, and architecture decisions are not delegated to AI agents. It is the combination of this irreplaceable domain expertise with modern agentic tooling that produces correct, fast, and deliverable algorithms.
Concrete examples from real projects — each answers a specific technical problem.
Extracting weak signals from noisy sensor data — Fourier transforms, lock-in amplifier detection, PCA decomposition to identify relevant physical components.
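For readers curious what lock-in detection looks like in code, here is a self-contained toy version: mix the noisy input with in-phase and quadrature references at the modulation frequency, low-pass the products, and recover amplitude and phase. The frequencies, noise level, and record length are invented example values, not project data.

import numpy as np
from scipy import signal

fs, f_ref = 10_000.0, 137.0                # sample rate and modulation frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)
weak = 0.01 * np.sin(2 * np.pi * f_ref * t + 0.3)  # weak signal, phase 0.3 rad
noisy = weak + 0.1 * np.random.randn(t.size)       # buried well below the noise floor

i_mix = noisy * np.sin(2 * np.pi * f_ref * t)      # in-phase reference
q_mix = noisy * np.cos(2 * np.pi * f_ref * t)      # quadrature reference
sos = signal.butter(4, 1.0, btype="low", fs=fs, output="sos")
I = signal.sosfiltfilt(sos, i_mix)                 # ≈ (A/2)·cos(phase)
Q = signal.sosfiltfilt(sos, q_mix)                 # ≈ (A/2)·sin(phase)

core = slice(t.size // 4, -t.size // 4)            # ignore filter edge effects
amplitude = 2 * np.hypot(I, Q)[core].mean()        # recovers A ≈ 0.01
phase = np.arctan2(Q, I)[core].mean()              # recovers ≈ 0.3 rad
print(f"A ≈ {amplitude:.4f}, phase ≈ {phase:.2f} rad")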
Object, defect, or pattern detection in industrial or medical image streams, using CNN architectures optimized for latency and limited data constraints.
Spectral processing algorithms (FTIR, radar) for atmospheric contaminant detection and quantification — deployed in industrial environments (e.g. ABB).
Integrating an LLM with RAG to contextualize recommendations from a user database, implemented in masemaine.ca. Includes lightweight fine-tuning and an inference pipeline.
Design of multi-agent systems capable of decomposing tasks, calling external tools, and looping on results — applicable to industrial automation and SaaS platforms.
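A schematic of that control flow, with plan_next_step standing in for any tool-calling LLM and two toy tools; the names and dictionary structure are illustrative, not a production design.

def run_agent(task: str, max_steps: int = 8) -> str:
    tools = {
        "read_sensor_log": lambda args: f"last value for {args['channel']}: 3.7",  # toy stub
        "lookup_spec": lambda args: f"tolerance for {args['part']}: ±0.05",        # toy stub
    }
    history = [f"TASK: {task}"]
    for _ in range(max_steps):                        # decompose: one tool call per step
        step = plan_next_step(history)                # hypothetical LLM call returning a dict
        if step["action"] == "finish":
            return step["answer"]                     # the agent decides it is done
        result = tools[step["action"]](step["args"])  # call the chosen external tool
        history.append(f"{step['action']} -> {result}")  # loop on the result
    return "stopped: step budget exhausted"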
Extraction of physical events from laser speckle signals. Monotonic-function constraints reject transient dips and artifacts; CNN denoising fuses multiple feature channels simultaneously — replacing brittle threshold cascades with a model that learns the full noise structure.
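The monotonic-constraint idea can be sketched with off-the-shelf isotonic regression as a stand-in: fit the best non-decreasing curve, then flag samples that fall far below it as transient artifacts. The drift model, noise level, and threshold below are invented for illustration; the deployed version is more involved.

import numpy as np
from sklearn.isotonic import IsotonicRegression

t = np.linspace(0, 10, 500)
drift = np.tanh(t / 3)                      # slow monotone physical drift
y = drift + 0.02 * np.random.randn(t.size)  # measurement noise
y[240:250] -= 0.3                           # transient artifact dip

fit = IsotonicRegression(increasing=True).fit_transform(t, y)  # best monotone fit
residual = y - fit
artifacts = residual < -0.1                 # dips the monotone model cannot follow
print(f"flagged {artifacts.sum()} samples as transient artifacts")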
Three specific differentiators — not a generic biography.
A PhD in physics (nanophotonics, Université Laval) is not just a title — it is training in modelling complex phenomena where approximations have measurable consequences. Every algorithm is grounded in physical understanding of the problem, not just a library call.
15+ years of applied engineering across photonics, medical, industrial, and energy sectors. Algorithms don't stay in a notebook — they are integrated, tested under real constraints, certified where required, and deployed. This is not consulting that stops at the prototype.
Paul combines irreplaceable domain expertise (physics, applied mathematics) with an agentic development infrastructure of 70+ specialized agents. The result: the rigour of a physicist with the speed of a full team. Neither alone achieves this.
Describe your challenge — Paul replies within 24 hours with an initial assessment.