Go beyond the AI hype to deliver measurable value
Expand your AI capabilities with a training model that empowers professionals to deploy real-world, responsible AI systems. By month 6, learners are delivering production-ready ML pipelines that drive measurable business impact. With world-class academic leadership from Imperial College London and expert coaching, this programme creates confident AI engineers who drive change, reduce risk, and unlock innovation.
Business impact
This programme will:
- Accelerate real-world AI engineering
- Reduce model risk and cost
- Upskill, innovate & retain talent
- Empower your people
Programme summary
- Duration: 16 months + 4 months EPA
- Delivery: Workshops + hands-on projects, technical coaching, Imperial masterclasses
- Who it’s for: Technical professionals ready to take ownership of end-to-end AI systems
- Delivered in collaboration with Imperial College London and supported by Microsoft
- Facilitators: Experts in AI and behaviour change who guide learners to apply practical AI skills in real-world business contexts
Programme curriculum
Unit 1: How AI changes our world
This opening unit sets the scene for modern AI: how machine learning is reshaping organisations and the legislation that governs responsible use. Learners gain the risk-aware, ethics-first mindset needed to champion safe, secure AI adoption. They also begin their deep dive into gathering business and user requirements, learning the significance of change management, quality assurance, sustainability and compliance.
- Workshop 1: Interactive end-to-end AI pipeline demonstration and requirements analysis with prompt engineering for a security-first business use case
- Task: Draft a compliant AI wireframe and push it to the internal repository with an accompanying risk-assessment ticket
Unit 2: Immediate impact with Machine Learning
We empower our learners by showing them how to scope, prototype and evidence “quick-win” ML services designed with commercial KPIs in mind. Learners also begin working through examples of workplace mathematics used to optimise performance metrics in a business context.
- Workshop 2: Collaborative engineering with AI: a rapid MVP delivery pipeline using public models
- Task: Re-use a containerised proof-of-concept ML model to prototype a solution to a real-world business problem, produce an ROI snapshot for stakeholders and pitch the projected saving or uplift
Unit 3: Programming for intelligent products
A hands-on dive into coding with Python, version control and core ML libraries. Emphasis is placed on clean code, test-driven development and reusable components that underpin production AI systems. Learners explore parallelism, algorithmic thinking and best programming practices.
- Workshop 3: Pair-programming & debugging clinic for fun and profit
- Task: Refactor legacy code into a reusable, unit-tested ML module and simulate interaction with a Software Engineer
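To give a flavour of this task, here is a minimal sketch of a reusable, unit-tested component in the spirit of the unit; the scaling helper and the pytest-style tests are illustrative choices, not prescribed deliverables:

```python
import numpy as np


def standardise(values: np.ndarray) -> np.ndarray:
    """Scale an array to zero mean and unit variance (a small reusable pipeline component)."""
    values = np.asarray(values, dtype=float)
    std = values.std()
    if std == 0:
        # A constant column carries no signal; return zeros rather than dividing by zero.
        return np.zeros_like(values)
    return (values - values.mean()) / std


# In the real task these tests would live in a separate test module and run under pytest.
def test_standardise_has_zero_mean_and_unit_variance():
    scaled = standardise(np.array([1.0, 2.0, 3.0, 4.0]))
    assert abs(scaled.mean()) < 1e-9
    assert abs(scaled.std() - 1.0) < 1e-9


def test_standardise_handles_constant_input():
    assert standardise(np.array([5.0, 5.0, 5.0])).tolist() == [0.0, 0.0, 0.0]
```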
Unit 4: Machine-Learning data environments
This unit covers data sourcing, cloud pipelines, secure cloud storage and cybersecurity best practices, linking technical design to cost, scalability and compliance. Learners create architecture diagrams that balance performance with governance. Learners can use our booster modules to brush up on networking and other relevant knowledge.
- Workshop 4: Architecting secure pipelines with IaC (Infrastructure-as-Code)
- Task: Produce a model of a proposed data flow and risk-assess it
Unit 5: MLOps and production
Learners master CI/CD, containerisation and live-model monitoring. The focus is on deploying robust pipelines and instituting incident-response procedures that keep AI services healthy. We will focus on model lifecycle management, staging, experiment tracking and live monitoring.
- Workshop 5: End-to-end AI/ML deployment and orchestration
- Task: Create an MLflow-tracked workflow. Ship an orchestrated model to a staging environment and set alert thresholds for service management
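As an illustration of what an MLflow-tracked workflow can look like, here is a minimal sketch; the experiment name, the open dataset and the alert threshold are assumptions made purely for the example:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ALERT_THRESHOLD = 0.90  # illustrative accuracy floor used for service alerting

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("staging-candidate")  # assumed experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics and the fitted model so the run is reproducible.
    mlflow.log_param("max_iter", 5000)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")

    if accuracy < ALERT_THRESHOLD:
        print(f"ALERT: accuracy {accuracy:.3f} is below threshold {ALERT_THRESHOLD}")
```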
Unit 6: Essential maths for ML engineers
We ease our learners into understanding vectors, matrices, statistics and optimisation in a practical, code-first knowledge/skills/behaviours unit. Learners connect mathematical theory to real ML tuning problems and performance metrics.
- Workshop 6: Metrics and model-tuning lab (metric-driven optimisation)
- Task: Compare optimiser performance on a real dataset and simulate collaboration with a data scientist
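For instance, a toy comparison of two optimisers on the same least-squares problem might look like the sketch below; the synthetic data and the plain NumPy implementation are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)


def mse(w):
    return np.mean((X @ w - y) ** 2)


def gradient(w):
    # Gradient of mean squared error for a linear model.
    return 2 * X.T @ (X @ w - y) / len(y)


def run(optimiser: str, steps: int = 200, lr: float = 0.1) -> float:
    w, velocity = np.zeros(3), np.zeros(3)
    for _ in range(steps):
        g = gradient(w)
        if optimiser == "momentum":
            velocity = 0.9 * velocity + g
            w -= lr * velocity
        else:  # plain gradient descent
            w -= lr * g
    return mse(w)


for name in ("gradient descent", "momentum"):
    print(f"{name}: final MSE = {run(name):.5f}")
```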
Unit 7: Supervised Machine Learning
Regression, classification and evaluation techniques take centre stage. Bias-variance trade-offs and model-selection strategies prepare learners for production scenarios. We also introduce computer vision.
- Workshop 7: Classification in action: production-grade classifiers
- Task: Train, evaluate and document a supervised model for a business dataset. Containerise it and publish an API with contract tests and version tags
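A minimal sketch of the train-and-evaluate portion of this task is shown below, using an open scikit-learn dataset as a stand-in for a business dataset; the containerisation and API steps are deliberately omitted:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Illustrative stand-in for a business dataset.
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluation evidence of the kind that would accompany the model's documentation.
print(classification_report(y_test, model.predict(X_test)))
```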
Unit 8: Feature engineering and testing
From data cleaning to feature extraction and stability testing, this unit shows how strong features make or break model performance. We teach robust feature pipelines, drift checks and automated unit tests.
- Workshop 8: Safeguarding model success: building model-drift and data-drift defences
- Task: Build a feature pipeline with automated checks. Create a feature store script with built-in monitoring and push it to the registry
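By way of example, a simple statistical drift check of the kind such a pipeline might automate is sketched below; the feature name, the significance threshold and the simulated production shift are assumptions made for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.05  # illustrative significance threshold


def check_feature_drift(reference: np.ndarray, live: np.ndarray, name: str) -> bool:
    """Flag drift when the live distribution differs significantly from the reference."""
    statistic, p_value = ks_2samp(reference, live)
    drifted = p_value < DRIFT_P_VALUE
    print(f"{name}: KS={statistic:.3f}, p={p_value:.3f}, drift={'YES' if drifted else 'no'}")
    return drifted


rng = np.random.default_rng(1)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # simulated shift in production

check_feature_drift(training_feature, live_feature, "customer_tenure")  # hypothetical feature name
```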
Hackathon event
A mid-programme team hackathon challenge to deliver an end-to-end ML solution onto a sandbox cluster.
Unit 9: How machines learn: unsupervised, ensemble & reinforcement
Clustering, semi-supervised, ensemble and reinforcement learning are explored here, with an eye on when each approach delivers business value.
- Workshop 9: Putting it all together: model selection play-off
- Task: Compare three unsupervised algorithms on the same dataset and justify the preferred choice. Publish a comparative dashboard and recommend a production candidate
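An illustrative comparison in this spirit, using three scikit-learn algorithms on an open dataset with silhouette score as the common yardstick (all illustrative choices), might look like:

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Illustrative dataset; in the task this would be a business dataset.
X = StandardScaler().fit_transform(load_iris(return_X_y=True)[0])

candidates = {
    "k-means": KMeans(n_clusters=3, n_init=10, random_state=0),
    "gaussian mixture": GaussianMixture(n_components=3, random_state=0),
    "dbscan": DBSCAN(eps=0.8, min_samples=5),
}

for name, model in candidates.items():
    labels = model.fit_predict(X)
    # The silhouette score is only defined when more than one cluster is found.
    if len(set(labels)) > 1:
        print(f"{name}: silhouette = {silhouette_score(X, labels):.3f}")
    else:
        print(f"{name}: produced a single cluster, silhouette undefined")
```

In practice the comparison would also weigh interpretability, runtime and stability, not the silhouette score alone.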
Unit 10: Neural networks for deep learning
Learners design, train and debug neural networks, weighing GPU cost against accuracy. In this unit, popular architectures such as CNNs, RNNs and transformers are demystified.
- Workshop 10: Build, train and benchmark a neural net
- Task: Tune hyper-parameters to beat a published baseline on a structured dataset
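As a flavour of the tuning exercise, the sketch below sweeps two hyper-parameters of a small PyTorch network on synthetic data; the dataset, the architecture and the search grid are assumptions made for illustration:

```python
import torch
from sklearn.datasets import make_moons

# Illustrative stand-in for a structured dataset.
X, y = make_moons(n_samples=1_000, noise=0.2, random_state=0)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)


def train(learning_rate: float, hidden_units: int, epochs: int = 200) -> float:
    torch.manual_seed(0)
    model = torch.nn.Sequential(
        torch.nn.Linear(2, hidden_units),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden_units, 1),
    )
    optimiser = torch.optim.Adam(model.parameters(), lr=learning_rate)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimiser.step()
    # Report training accuracy for the swept configuration.
    return ((model(X) > 0).float() == y).float().mean().item()


# Simple hyper-parameter sweep: record which configuration beats the baseline.
for lr in (0.1, 0.01):
    for hidden in (8, 32):
        print(f"lr={lr}, hidden={hidden}: train accuracy = {train(lr, hidden):.3f}")
```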
Unit 11: ML optimisation techniques and strategies
Transfer learning, fine-tuning and pruning help learners squeeze extra performance from models while keeping costs down.
- Workshop 11: Optimising a pre-trained model with Hugging Face
- Task: Adapt a public model to a novel dataset and log the improvement curve
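A minimal fine-tuning sketch with the Hugging Face Trainer is shown below; the checkpoint, the tiny in-memory dataset and the training settings are placeholder assumptions, not programme materials:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"  # illustrative public checkpoint

# Tiny stand-in for the "novel dataset"; the real task would load a proper corpus.
data = Dataset.from_dict({
    "text": ["great service", "terrible delay", "very helpful", "awful support"],
    "label": [1, 0, 1, 0],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32),
    batched=True,
)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-demo", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=data,
)
trainer.train()  # the loss logged at each step provides the improvement curve to record
```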
Unit 12: Solving big business problems with AI
Focus moves to NLP and domain-specific AI solutions. Learners align technical deliverables with strategic outcomes, commercial goals and stakeholder needs.
- Workshop 12: Sentiment analysis for commercial insight
- Task: Build a text analytics service and integrate it with a Business Intelligence platform to produce a business impact memo informing strategy
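For illustration, the sketch below scores hypothetical feedback with an off-the-shelf sentiment model and exports an aggregate that a BI platform could ingest; the model choice, the data and the output file name are assumptions:

```python
import pandas as pd
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # illustrative off-the-shelf model

# Hypothetical customer feedback keyed by product line.
feedback = pd.DataFrame({
    "product": ["app", "app", "support", "support"],
    "comment": ["Setup was effortless", "Love the new dashboard",
                "Waited days for a reply", "Agent resolved it quickly"],
})

feedback["positive"] = [r["label"] == "POSITIVE" for r in sentiment(feedback["comment"].tolist())]

# Aggregate to a table a BI dashboard could read directly.
summary = feedback.groupby("product")["positive"].mean().rename("positive_share")
summary.to_csv("sentiment_summary.csv")
print(summary)
```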
Unit 13: LLMs and GenAI deployments
Retrieval-augmented generation (RAG), prompt engineering and scalable LLM hosting are covered, highlighting the opportunities and risks of GenAI, from RAG patterns to secure GenAI APIs operating in agentic architectures.
- Workshop 13: Hands-on GenAI deployment: rolling out an agentic LLM service
- Task: Deploy a small API-enabled RAG-powered service and quantify the most relevant metrics for optimisation
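A stripped-down sketch of the retrieval step behind such a RAG service is shown below, using TF-IDF retrieval as a stand-in for an embedding store; the documents are hypothetical and the call to a hosted LLM is deliberately omitted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical internal knowledge base the service would retrieve from.
documents = [
    "Refunds are processed within five working days.",
    "Premium support is available on the enterprise plan.",
    "Data is stored in EU regions and encrypted at rest.",
]

vectoriser = TfidfVectorizer()
doc_vectors = vectoriser.fit_transform(documents)


def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant passages and assemble a grounded prompt."""
    scores = cosine_similarity(vectoriser.transform([question]), doc_vectors)[0]
    context = "\n".join(documents[i] for i in scores.argsort()[::-1][:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


prompt = build_prompt("Where is customer data stored?")
print(prompt)
# In the real task the prompt would be sent to a hosted LLM endpoint and the answer
# served via the service's API, with retrieval hit rate and latency as candidate metrics.
```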
Unit 14: Thinking inside the box: explainable AI
Fairness, visualisation and explainability (XAI) techniques help learners communicate model behaviour to technical and non-technical audiences.
- Workshop 14: Explainability dashboard sprint and user-centric critical analysis
- Task: Produce a SHAP-based report and accompanying stakeholder slide deck
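To illustrate the kind of evidence behind such a report, the sketch below computes SHAP values for a tree model on an open dataset; the model and dataset are illustrative stand-ins for a learner's own:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative model and dataset standing in for a learner's production model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# The summary plot ranks features by their impact on predictions; a figure like this
# would anchor the written report and the stakeholder slide deck.
shap.summary_plot(shap_values, X)
```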
Unit 15: Into the future: project work and innovation
Learners scope their EPA project, practise stakeholder engagement and map career pathways, integrating all previous skills. We focus on integrating vs. decommissioning legacy systems, horizon scanning with technology scorecards, managing innovation with roadmaps and honing the learner’s research skills.
- Workshop 15: Project strategy & stakeholder masterclass
- Task: Finalise capstone documentation and evaluation
End point assessment
The end point assessment is the final assessment of the knowledge, skills and behaviours developed throughout the programme. Learners check their CPD answers, reflect on their learning journey and prepare for the professional discussion. By refining their existing artefacts and practising the assessment methods, we prepare learners not just for the EPA but also for their ongoing professional development.
World-class thinkers, mentors and educators
Expert-led AI coaching
Expert delivery is at the heart of the Corndel experience.
People, not platforms, drive AI success. That’s why learners receive expert-led AI coaching from Professional Development Experts (PDEs), who blend technical knowledge with human-centric coaching. Our coaching model empowers individuals to embed AI into their workflows and lead change confidently.
Why Corndel for your AI productivity skills?
Empowered learners – Our experiential, coaching-led model ensures AI skills are embedded and adopted, not just understood.
Human-first – Ethics, privacy and sustainability are woven through every unit, aligning with the EU AI Act.
Real-world impact – At our ‘Double Your Productivity with AI’ event, 94% of professionals said they would immediately apply what they learned.
Government funded – Utilise your Apprenticeship levy to cover the full cost of this programme.
