
Brief: Coursework II – Set Exercise [75%]

Submission Deadline: 8th December 2025

Feedback Date: 13th January 2026

Module Title: Intelligence Engineering and Infrastructure

Module Code: COM774 (79166)

Semester(s) Taught: One

Course / Year Group:  MSc CS/AI/AI(CUQ)/7

Coursework / Exam Weighting: 75% (of Total coursework)

Coursework Assessment

This module is assessed by two pieces of coursework: CW1 and CW2.

Coursework II is explained in the document as follows:

In Coursework CW2, the focus will shift to the remaining stages of the MLOps workflow. This includes Model Development, where machine learning models are designed, trained, and validated; CI/CD, which automates testing and integration of code, data, and models; and Deployment, where models are moved into production environments. It also covers Monitoring, to track performance and detect issues such as data drift; Retraining, to update models with new data; and Governance, which ensures compliance, transparency, and accountability across the entire lifecycle.
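As an illustration of the monitoring stage described above, one simple way to flag data drift is to compare a feature's distribution in incoming production data against its training-time baseline. The sketch below is a minimal, stdlib-only illustration (the function name, data, and threshold are assumptions for demonstration, not part of the brief; real pipelines typically use statistical tests such as Kolmogorov–Smirnov or the population stability index):

```python
import statistics

def detect_mean_drift(baseline, current, threshold=2.0):
    """Flag drift when the current batch mean moves more than
    `threshold` baseline standard deviations from the baseline mean.

    A deliberately simple heuristic for illustration only.
    """
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    shift = abs(statistics.mean(current) - base_mean)
    return shift > threshold * base_std

# Training-time feature values vs. a drifted production batch.
baseline = [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05]
drifted = [3.0, 3.2, 2.9, 3.1, 3.05, 2.95, 3.1]

print(detect_mean_drift(baseline, baseline))  # False: no drift
print(detect_mean_drift(baseline, drifted))   # True: drift flagged
```

In a deployed solution, a check like this would run on each batch of incoming data and trigger an alert or a retraining job when it fires.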

Related Learning Outcomes:

  1. Autonomously and independently evaluate deficiencies when interacting with a range of technologies and leveraging knowledge of these deficiencies to improve future practice. 
  2. Appraise, select and autonomously apply skills to leverage a range of machine learning paradigms.
  3. Demonstrate the ability to critically appraise what is meant by intelligence engineering and infrastructure, and how a variety of processes and paradigms may be applied to address the challenges it presents.

Students will be set an individual exercise in which they are expected to use the dataset previously identified in CW1 to produce a machine learning model that addresses a problem. Students will then implement a solution to the problem using technologies and techniques covered in the module.

Requirements: 

Specifically for this exercise, students are required to perform the following three tasks.

Task 1: Solution Design

Using the dataset identified in CW1 (e.g., human activity recognition dataset), design a solution that applies MLOps processes and technologies.

Your design should include:

  • Use of the pre-selected dataset from CW1.
  • Appropriate data versioning and storage platforms.
  • Training a machine learning model and evaluating its performance across at least two iterations of the pipeline, using tools for managing and tracking experiments.
  • Evaluation/testing methods to detect performance regressions.
  • Consideration of scalability and deployment constraints.

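The experiment-tracking and regression-testing requirements above can be read together: each pipeline iteration records its parameters and metrics, and a later run is rejected if its score falls behind the best run so far by more than a tolerance. The sketch below is a stdlib-only illustration of that idea (the record structure, metric name, and tolerance are assumptions; in practice a dedicated tool such as MLflow or DVC would manage runs and versions):

```python
experiments = []

def log_run(run_id, params, metrics):
    """Record one pipeline iteration's parameters and metrics."""
    experiments.append({"run_id": run_id, "params": params, "metrics": metrics})

def best_run(metric="accuracy"):
    """Return the logged run with the highest value for `metric`."""
    return max(experiments, key=lambda r: r["metrics"][metric])

def regression_check(new_metrics, metric="accuracy", tolerance=0.01):
    """Pass only if the new run is within `tolerance` of the best
    previously logged run; otherwise flag a performance regression."""
    baseline = best_run(metric)["metrics"][metric]
    return new_metrics[metric] >= baseline - tolerance

# Two iterations of the pipeline, as the brief requires.
log_run("iter-1", {"n_estimators": 50}, {"accuracy": 0.88})
log_run("iter-2", {"n_estimators": 200}, {"accuracy": 0.91})

print(best_run()["run_id"])                  # iter-2
print(regression_check({"accuracy": 0.90}))  # True: within tolerance
print(regression_check({"accuracy": 0.85}))  # False: regression detected
```

Wired into CI/CD, a failing `regression_check` would block the candidate model from being promoted to deployment.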
Task 2: Implementation, Deployment, and Testing 

  • Implement and deploy the designed solution using the technologies taught in the module.
  • Consider limitations of free cloud tiers and potential costs.
  • Important: If using cloud services, you must use the platform covered in the module’s teaching materials.

Task 3: Presentation & Demonstration 

Prepare a slide deck (approx. 12 content slides) containing an embedded 5-minute video demonstration.

  • Slide 0: Title slide (project name, one-line description, student name, student number).
  • Slides 1–2: Problem discussion and relevant MLOps considerations.
  • Slides 3–6: Solution design and development process.
  • Slides 7–8: Key features (e.g., model testing, evaluation methods).
  • Slides 9–10: Limitations and scalability evaluation.
  • Slide 11: Recorded demonstration of solution functionality.
  • Slide 12: Provide concluding comments and a properly formatted reference list.

The video should:

  • Clearly demonstrate the solution.
  • Showcase the use of MLOps tools, testing, and deployment.
  • Provide evidence of decisions and processes referenced in the slides.

 Assessment Criteria:

Your submission will be assessed against the following criteria: 

  1. Problem Definition and Discussion (10%)
  • Quality of the description of the overall problem.
  • Justification of why scalable intelligence engineering is needed.
  • Critical appraisal of relevant MLOps and intelligence engineering patterns.
  2. Overview of the Technical Solution Developed (15%)
  • Justification for chosen technologies and design.
  • Consideration of alternative technologies.
  • Incorporation of scalable elements and design patterns.
  • Quality of solution architecture and diagrams.
  3. Testing Approaches (20%)
  • Integration of testing into the solution.
  • Use of appropriate metrics.
  • Application of testing to model evaluation and pipeline refinement.
  4. Performance Evaluation and Scalability (20%)
  • Assessment of solution limitations.
  • Strategies to address weaknesses.
  • Evaluation of scalability across deployment environments.
  5. Concluding Comments, References & Presentation (10%)
  • Quality of reflection on functionality, limitations, and applicability.
  • Accuracy and formatting of references.
  • Professionalism of presentation (structure, clarity, formatting, figures).
  6. Video Demonstration (25%)
  • Functionality of the solution.
  • Clarity and quality of demonstration.
  • Evidence of using scalable intelligence engineering infrastructure.

Note: Total: 100% (equivalent to 75% of module mark)

Submission Guidelines:

Prepare your presentation slides

  • Create your presentation slides in PowerPoint format (.pptx).
  • Ensure slides are clear, well-structured, and support the flow of your video presentation.

Record your video presentation

  • Record a 5-minute video presentation incorporating your PowerPoint slides.
  • The video should clearly explain the complete MLOps workflow, including the data pipeline (covered in CW1), model development, continuous integration and deployment (CI/CD), deployment, monitoring, retraining, and governance.
  • Ensure both audio narration and visuals (slides, code examples, or demonstrations) are clear and professional.

Combine slides and video

  • Save your presentation as a single file that includes both your video recording and slides.
  • Check playback to ensure that audio, visuals, and timing work correctly.

Final checks

  • Verify that your submission is complete, professional, and within the time limit.
  • Ensure your name, student ID, and module code are included on the first slide.
  • Confirm that the file format is accepted (PowerPoint with embedded video). 

Upload to Blackboard

  • Log in to Blackboard and navigate to the Assessment 1 submission folder.
  • Upload your final presentation file before the submission deadline.
  • Double-check that the correct version has been uploaded.

Plagiarism and academic integrity

  • Ensure your submission complies with the University’s plagiarism policy.
  • Any use of external sources, datasets, or code must be appropriately referenced.

Note: According to Ulster University Assessment Code of Practice, where submitted work exceeds the agreed assessment limit, a margin of up to +10% of the work limit will be allowed without any penalty of mark deduction. If the work submitted is significantly in excess of the specified limit (+10%), there is no expectation that staff will assess the piece beyond the limit or provide feedback on work beyond this point. Markers will indicate the point at which the limit is reached and where they have stopped marking. A mark will be awarded only for the content submitted up to this point. No additional deduction or penalty will be applied to the overall mark awarded. The student is self-penalising as work will not be considered/marked. 

N.B. Students should be aware of the plagiarism policy of the University and submit their coursework in accordance with this. 

N.B. Students are required to implement this solution using the concepts and techniques that were the focus of the teaching materials in this module. These broadly cover a hosted/cloud-native design or a containerised solution. It is recommended that students appraise both of these design approaches in their slides.

References

[1] “Ulster University Student Guide.” [Online]. Available: https://www.ulster.ac.uk/connect/guide.

[2] “Academic Integrity and Plagiarism.” [Online]. Available: https://www.ulster.ac.uk/student/exams/cheating-and-plagiarism.

Appendix A: Rubric: COM774 – Assessment Levels, Coursework II

Each criterion is marked against four levels: Fail (0–49%), Pass (50–59%), Commendation (60–69%), and Distinction (70–100%).

1. Problem Definition and Discussion (10%)

  • Fail (0–49%): Little or no description of the overall problem was provided; poor or no justification of why scalable intelligence engineering needs to be adopted.
  • Pass (50–59%): Moderate description of the overall problem was provided, with adequate justification of why intelligence engineering needs to be developed. Adequate critical appraisal of the use of intelligence engineering and related patterns.
  • Commendation (60–69%): Good description of the overall problem, with a good justification of why intelligence engineering needs to be developed. Strong critical appraisal of the use of intelligence engineering related patterns and architectural components.
  • Distinction (70–100%): Excellent description of the overall problem, with excellent justification of why intelligence engineering needs to be developed. Outstanding appraisal of the use of intelligence engineering related patterns and architectural components.

2. Overview of the Technical Solution Developed (15%)

  • Fail: Justification for the choice of technology applied to the problem was minimal. The design was poorly informed and did not incorporate many scalable native elements. No meaningful solution architecture was presented.
  • Pass: The technology used to produce the solution was appropriate given the development problem. Moderate effort was made to incorporate scalable intelligence-enabling components, and the design was satisfactorily informed by scalable design patterns. An architectural diagram of the developed solution was presented.
  • Commendation: The technology used to produce the solution was carefully examined and logically chosen given the development problem. Alternative technologies were examined and excluded accordingly. A wide range of scalable components was incorporated into the solution, and the design was considered and justified through scalable design patterns. The solution architecture was documented well, incorporating control flows and software architecture diagrams.
  • Distinction: The technology used was exhaustively examined and critically appraised following a logical selection process. Alternative technologies were examined and an excellent rationale for their exclusion was provided. A wide and comprehensive range of scalable components was incorporated into the solution, and the design was well justified through scalable design patterns. The solution architecture was documented in great detail, incorporating outstanding control-flow and software architecture diagrams.

3. An Overview of Testing Approaches within the Developed Solution (20%)

  • Fail: Minimal or no testing was incorporated into the final solution; at best, testing was partially incorporated but not fully implemented.
  • Pass: Good testing was integrated in a meaningful manner. The application of the testing to a specific aspect of the model(s) produced was justified given the context of the problem and provides a more compelling experience for end users.
  • Commendation: Testing was integrated in an informed manner, with a range of metrics considered and produced. The outcome of this testing was used to inform automatic retraining mechanisms within the intelligence pipeline.
  • Distinction: Testing was integrated in an excellent manner, with a range of metrics considered and produced. The outcome of this testing was used to inform automatic retraining mechanisms within the intelligence pipeline, and the output from this testing was further processed to improve the intelligence pipeline.

4. An Assessment of the Solution’s Performance and Evaluation of Its Ability to Effectively Deliver Machine Learning Artefacts across a Diverse Range of Deployment Environments (20%)

  • Fail: The limitations of the solution were not enumerated or discussed adequately. The solution produced a minimal number of artefacts, and scalability across deployment scenarios was not articulated.
  • Pass: A solid appraisal of the limitations of the solution was produced, with some awareness of how to remedy these. Scalability of intelligence to candidate deployments was partially catered for in the solution.
  • Commendation: An informed appraisal of the limitations of the solution was presented, along with strategies to address these. Scalability was well catered for, with a limited set of targeted deployment environments realised.
  • Distinction: A broad appraisal of the limitations of the solution was presented, with exemplary strategies for improvement. Scalability was well catered for, with multiple targeted deployment environments realised.

5. Concluding Comments, References & Overall Presentation (10%)

  • Fail: Limited reflection was applied to the solution, its functionality, limitations, and potential applicability. No reference section and/or no in-text references were included, or referencing was minimal with inappropriate formatting. The presentation does not follow the standard conventions outlined in the description; figure titles and numbers may be missing, and irrelevant figures or narratives may appear in the work.
  • Pass: Meaningful reflection was applied to the solution, its functionality, limitations, and potential applicability.
  • Commendation: Insightful reflection was applied to the solution, its functionality, limitations, and potential applicability.
  • Distinction: Weaknesses were identified and improvements were suggested.

6. Video Demonstration (25%)

  • Fail: The solution performed poorly or didn’t operate at all. Limited evidence was provided of the use of scalable intelligence infrastructure.
  • Pass: The solution functioned moderately well; implementation issues may have been present but were deemed tolerable. Some evidence of the use of scalable intelligence infrastructure was presented.
  • Commendation: The solution performed well and had acceptable implementation issues. Advanced techniques or functionality were demonstrated. It was evident that the solution leveraged a range of scalable intelligence infrastructure technologies.
  • Distinction: The solution performed well and had minimal implementation issues. Advanced techniques or functionality were demonstrated. It was evident that the solution leveraged a diverse range of scalable intelligence infrastructure technologies and mechanisms.
