I Used AI to Make a Production System Measurable in Under 30 Minutes


Not a Demo. A Real System.

I used AI to design, build, deploy, and generate measurable results in a production system in under 30 minutes.

  • Not a prototype.
  • Not a demo.
  • A real platform. With real users.

And importantly — under real constraints.

This isn’t a story about AI generating content.

It’s about what happens when you apply AI to a system that already exists, already works, and already has users — but lacks visibility.

 

The Context

For over six years, I’ve been building and operating a platform called WhereWeLearn.

It’s a global, charity-led initiative designed to help people discover and organise free educational content.

The platform is built on the LEAST engine (Linking Educational And Social Technologies), a production system that has evolved over 8+ years without a framework, using:

  • server-rendered PHP
  • domain-based libraries
  • global state
  • a centralised database abstraction layer

This wasn’t accidental.

The system is designed to be:

  • stable
  • understandable
  • controllable

And importantly — usable in the real world.
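
For illustration, a page in this style looks roughly like the sketch below. The names used (db_select_one, render_lesson) are placeholders for this example, not the actual LEAST engine API.

<?php
// Purely illustrative sketch of the style described above: server-rendered
// PHP, no framework, domain logic in plain functions, and all data access
// going through one central helper.

/** Centralised database abstraction layer: every query goes through here. */
function db_select_one(string $table, array $where): ?array
{
    // In the real engine this would build and run a parameterised query;
    // the stub keeps the sketch self-contained.
    return ['id' => $where['id'] ?? 0, 'title' => 'Example lesson'];
}

/** Domain-based library: lessons own their own rendering. */
function render_lesson(array $lesson): string
{
    return '<h1>' . htmlspecialchars($lesson['title'], ENT_QUOTES) . '</h1>';
}

// Server-rendered entry point: fetch, then emit HTML directly.
$lesson = db_select_one('lessons', ['id' => (int) ($_GET['id'] ?? 0)]);

if ($lesson === null) {
    http_response_code(404);
    exit('Lesson not found');
}

echo render_lesson($lesson);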

 

The Constraint (This Changes Everything)

WhereWeLearn operates under a strict constraint:

It does not promote content.

No marketing.
No optimisation.
No algorithmic bias.

This is by design.

As a charity-led platform, it must remain:

  • neutral
  • non-commercial
  • unbiased in how content is surfaced

Despite that, over time the platform generated:

  • 134,000+ engagement events
  • users across 130+ countries
  • entirely organic discovery

Which creates an unusual situation:

👉 A system with real usage — but no structured way to measure it.

 

The System Behaviour

To deliver on the strategic learning goals, each lesson created becomes:

  • an indexable asset
  • part of a structured learning graph
  • connected to materials and related lessons

Specifically, the platform relies on:

  • sitemap accuracy
  • OpenGraph integration
  • internal linking
  • distributed entry points

Users don’t arrive through a homepage.

They arrive:

  • via search engines
  • directly into materials
  • inside specific learning contexts

In effect, this is programmatic SEO without marketing.
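
As a rough sketch of how the OpenGraph side of this can work (the helper name and the lesson fields here are illustrative, not the actual WhereWeLearn code):

<?php
// Each lesson page emits its own metadata, so each one works as a
// stand-alone entry point for search engines and shared links.

function render_opengraph_tags(array $lesson): string
{
    $tags = [
        'og:type'        => 'article',
        'og:title'       => $lesson['title'],
        'og:description' => $lesson['summary'],
        'og:url'         => $lesson['url'],
    ];

    $html = '';
    foreach ($tags as $property => $content) {
        $html .= sprintf(
            "<meta property=\"%s\" content=\"%s\">\n",
            htmlspecialchars($property, ENT_QUOTES),
            htmlspecialchars($content, ENT_QUOTES)
        );
    }
    return $html;
}

echo render_opengraph_tags([
    'title'   => 'Introduction to Fractions',
    'summary' => 'A free lesson with linked materials and related lessons.',
    'url'     => 'https://example.org/lesson/123',
]);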

 

The Problem

The system worked well.

But I couldn’t answer basic questions:

  • Are users engaging deeply or bouncing?
  • Are they following learning paths?
  • Is the system improving over time?

There was no meaningful feedback loop.

Which means:

The system could evolve — but not intelligently.

 

The Intervention

Instead of rebuilding anything, I introduced two things:

 

1. AI-Assisted Engineering

Using tools such as Anthropic's Claude Code and ChatGPT, I:

  • designed a measurement model
  • implemented a reporting engine
  • deployed it into production

Time from idea → live system:

Under 30 minutes

 

2. A Measurement Layer (Without Tracking Users)

Rather than introducing cookies or heavy analytics, I used AI to extend the existing audit system against a set of measurable requirements.

The principle:

Track behaviour, not people.

This enabled:

  • session depth approximation
  • lesson vs material engagement
  • learning flow tracking
  • bot filtering
  • time-based measurement

All within the existing architecture.
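
To make the "behaviour, not people" idea concrete, here is one cookie-less way to approximate session depth and filter bots from existing audit rows. The helper and column names are assumptions for this sketch; the actual audit schema and logic may differ.

<?php
// Approximate sessions by grouping page views on a salted, day-scoped hash
// of IP and user agent. No cookies, no persistent identifier across days.

function visitor_key(string $ip, string $userAgent, string $day, string $salt): string
{
    // The day-scoped, salted hash groups behaviour within a day without
    // identifying a person across days.
    return hash('sha256', $salt . $day . $ip . $userAgent);
}

function is_probable_bot(string $userAgent): bool
{
    return (bool) preg_match('/bot|crawler|spider|curl|wget/i', $userAgent);
}

/** @param array[] $auditRows rows with 'ip', 'user_agent', 'viewed_at' */
function average_session_depth(array $auditRows, string $salt): float
{
    $depths = [];
    foreach ($auditRows as $row) {
        if (is_probable_bot($row['user_agent'])) {
            continue;  // bot filtering
        }
        $day = substr($row['viewed_at'], 0, 10);
        $key = visitor_key($row['ip'], $row['user_agent'], $day, $salt);
        $depths[$key] = ($depths[$key] ?? 0) + 1;  // pages per approximate session
    }
    return $depths ? array_sum($depths) / count($depths) : 0.0;
}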

 

The First Output

Within minutes of deployment, the system produced measurable data (a sketch of how such figures can be derived follows the list):

  • Average session depth: 7 pages
  • Lesson engagement: 26.1%
  • Learning flow rate: 8.3%
  • AI-driven traffic: 0% (baseline confirmed)
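
For context, here is one plausible way figures like lesson engagement and learning flow rate could be computed from the same audit data. The definitions and field names are assumptions, not the exact logic of the live reporting engine.

<?php
// Rows are assumed to carry 'session_key' and 'page_type' ('lesson' or 'material').

/** Share of all page views that land on lessons rather than materials. */
function lesson_engagement(array $views): float
{
    $lessonViews = count(array_filter($views, fn ($v) => $v['page_type'] === 'lesson'));
    return count($views) > 0 ? 100.0 * $lessonViews / count($views) : 0.0;
}

/** Share of sessions that visit two or more lessons, i.e. follow a path. */
function learning_flow_rate(array $views): float
{
    $lessonsPerSession = [];
    foreach ($views as $v) {
        if ($v['page_type'] === 'lesson') {
            $lessonsPerSession[$v['session_key']] =
                ($lessonsPerSession[$v['session_key']] ?? 0) + 1;
        }
    }
    $sessions = count(array_unique(array_column($views, 'session_key')));
    $flows    = count(array_filter($lessonsPerSession, fn ($n) => $n >= 2));
    return $sessions > 0 ? 100.0 * $flows / $sessions : 0.0;
}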

 

 

Interpreting This Properly

This is early data, and it needs to be treated carefully.

For example:

  • The session depth was generated during system testing (not yet real user behaviour)
  • Traffic volume is too small to draw conclusions
  • Learning flow requires more time to stabilise

But one signal is already meaningful:

 

Lesson Engagement Is Increasing

Historically:

  • ~3% (2024)
  • ~17% (2026 YTD)

Initial measured value:

  • 26.1%

Even allowing for small samples, the direction is clear:

Users are moving from passive consumption to structured learning.

 

 

What Actually Changed

This is the key point.

The platform itself didn’t change.

No redesign.
No feature expansion.
No new content strategy.

What changed was:

1. The System Became Observable

Before:

  • behaviour unknown

After:

  • behaviour measurable

 

2. The Feedback Loop Collapsed

Before:

  • weeks to understand impact

After:

  • minutes

 

3. AI Became an Activation Layer

Not replacing engineering.
Not replacing systems.

But enabling:

  • faster iteration
  • better decisions
  • measurable outcomes

 

What Didn’t Change

This matters just as much.

  • The platform remains neutral
  • No promotion occurs within WhereWeLearn
  • No user tracking was introduced
  • Governance constraints remain intact

Activation happens externally.
Measurement happens internally.

The system remains trusted.

 

What This Means

This approach isn't specific to this platform.

It demonstrates something broader:

AI doesn’t need to replace systems to be valuable.
It needs to make them understandable.

And once a system is measurable:

  • decisions improve
  • iteration accelerates
  • outcomes become visible

 

What Comes Next

This is still the baseline.

The next step is controlled activation:

  • introducing AI-driven content externally
  • directing traffic through philipalacey.com
  • measuring the impact on behaviour

And most importantly:

Observing what actually changes — and what doesn’t.

 

Final Thought

Most discussions about AI focus on capability.

In practice, what matters is:

  • where it is applied
  • what constraints exist
  • whether impact can be measured

In this case:

AI didn’t replace the system.
It made the system measurable.

And that’s where real change begins.
