report:prm

This chapter provides an overview of the project, addressing scope, time, cost, quality, communication, the project plan, sprints, and scrum meetings.

The project scope is limited to developing a POC of the smart bottle. The technical focus will be on reading certain minerals from the water and enabling communication with the app. The prototype will be tested in a controlled environment and is not intended for full deployment in a real-life operating environment.

In addition to the technical prototype, the project will include a full-scale report on how the bottle should look and function. The report will include recommendations for future development, deployment plans, dataset improvement, integration opportunities, and potential risks.

Project Start: 6 March 2026

Team Preparedness: The team members should have the required knowledge and skills in ESP32 configuration, marketing, ethics, and the other topics covered in this report. Preparatory training or upskilling may be necessary if the team lacks specific expertise.

Stakeholder Communication: Establishing effective communication channels with stakeholders, including the client and end-users, is a precondition. Clear communication protocols should be in place to gather feedback and requirements.

Risk Assessment and Mitigation Plan: The project needs to perform a risk assessment beforehand to pinpoint potential risks and create plans to manage them. This prepares the project to handle unexpected challenges effectively.

Test Environment: It is crucial to have a testing environment for thorough application testing before deploying it. This testing environment needs to closely resemble the final deployment environment.

Figure 1: TRAQUA SCOPE

This subchapter outlines the deadlines that must be met. Documenting key milestones and linking them to specific deadlines is crucial for clarity, accountability, and progress tracking. It ensures that teams stay aligned, allows for the early identification of potential risks, and enables timely adjustments. A well-structured timeline enhances efficiency and significantly increases the likelihood of project success.

Project Duration: 2026-02-23 → 2026-06-25 (122 days / ~17.4 weeks)

# Date Milestone Days from start Risk
0 2026-02-23 Project start 0 —
1 2026-02-28 Choose top 3 projects 5 Low
2 2026-03-11 Upload black box diagram and Structural Draft 16 Medium
3 2026-03-18 Upload the List of Components and Materials 23 High
4 2026-03-21 Define Project Backlog, Global Sprint Plan, Initial Sprint Plan and Release Gantt Chart 26 Medium
5 2026-03-25 Upload System Schematics & Structural Drawings + cardboard scale model 30 High
6 2026-04-12 Upload Interim Report and Presentation 48 Medium
7 2026-04-16 Interim Presentation + feedback 52 Low
8 2026-04-22 Upload 3D model video 58 Medium
9 2026-04-29 Upload final List of Materials (local providers, price, VAT, transportation) 65 High
10 2026-05-02 Upload refined Interim Report (after feedback) 68 Low
11 2026-05-13 Upload packaging solution 79 Medium
12 2026-05-27 Upload Functional Tests results 93 High
13 2026-06-13 Upload Final Report, Presentation, Video, Paper, Poster and Manual 110 High
14 2026-06-18 Final Presentation + Individual Discussion + Assessment 115 Medium
15 2026-06-23 Wiki/report/paper corrections, refined deliverables, printed poster/brochure/leaflet 120 Medium
16 2026-06-25 Prototype demonstration, submit prototype and user manual 122 High
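The "days from start" column can be sanity-checked directly from the calendar dates. A minimal Python sketch (a few representative milestones, names abbreviated):

```python
from datetime import date

# Recompute each milestone's "days from start" offset from its calendar date.
start = date(2026, 2, 23)

milestones = {
    "Choose top 3 projects": date(2026, 2, 28),
    "Interim Presentation + feedback": date(2026, 4, 16),
    "Upload Functional Tests results": date(2026, 5, 27),
    "Prototype demonstration": date(2026, 6, 25),
}

for name, day in milestones.items():
    print(f"{name}: day {(day - start).days}")
```

Deriving the offsets from the dates rather than maintaining them by hand avoids drift when a milestone is rescheduled.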

Risk legend:

  • Low — Well-defined task, short turnaround, low dependency on external factors.
  • Medium — Moderate complexity or dependency on prior deliverables; recoverable if delayed.
  • High — Blocks downstream work, depends on external factors (suppliers, hardware, feedback), or has cascading consequences if missed.

High-risk rationale:

  • List of Components (2026-03-18) — drives all sourcing and budgeting downstream.
  • System Schematics + scale model (2026-03-25) — physical deliverable, hardware/material dependency.
  • Final Materials List (2026-04-29) — depends on supplier responses, pricing, VAT, shipping lead times.
  • Functional Tests (2026-05-27) — prototype must be working; bugs/hardware failures can cascade.
  • Final deliverables bundle (2026-06-13) — largest single upload (report, presentation, video, paper, poster, manual); coordination-heavy.
  • Prototype demonstration (2026-06-25) — final, non-recoverable; live hardware failure = project failure.

Project Budget and Cost Management

The project budget was defined specifically for the development of a single smart water bottle prototype, with the 100 € allocation covering only the material and component costs required for one unit. The largest expenses were associated with electronic components, sensors, and structural materials. Key elements included the microcontroller, TDS sensor (for water quality), FSR406 pressure sensor (for water level), and LIS3DHTR accelerometer (for motion detection). Additional components such as MOSFETs and supporting circuitry were also required to ensure safe and reliable operation.

Mechanical elements included the bottle, UV-C protection materials, and mounting components. Smaller items such as wires, fuses, and prototyping boards were not included in the budget, as they were already available in the university laboratory.

Budget Management

The budget was carefully managed throughout the project lifecycle. Multiple Portuguese suppliers were evaluated to achieve a balance between cost, quality, and delivery time. Where possible, components were sourced from a single supplier to minimize shipping costs. A deliberate decision was made to avoid ordering from China, improving delivery reliability and lead times at the expense of slightly higher costs.

In some cases, sourcing from multiple suppliers resulted in increased shipping expenses. Although lower-cost alternatives were available, the team prioritized components that best satisfied the technical requirements and overall system design. The approach focused on maintaining performance while controlling costs where feasible.

Cost Analysis

The final prototype cost reached 155.25 €, exceeding the initial 100 € budget by 55.25 € (approximately 55 %). This variance was mainly due to shipping costs, VAT, and slight underestimations during the planning phase. Despite this, the deviation remained relatively small and acceptable for a prototype. These costs are expected to drop significantly in mass production, where a reduction of at least 70 € per unit can be expected.

Conclusion

Overall, the budget was effectively controlled, with only a modest increase from the original 100 € estimate. All critical system requirements were successfully achieved. The project highlights the importance of appropriate component selection, supplier management, order consolidation, and leveraging available resources to optimize costs.


Mechanical Components (Per Bottle)
Name Description Link Quantity Unit Price (€)
Plastic Bottle Body of the prototype IKEA.pt 1 2.50
Aluminium foil Reflective, waterproof, isothermic Continente.pt 1 1.50
Activated carbon filter Filters chlorine & improves taste Joom.pt 1 12.40

Electrical Components (Per Bottle)
Name Description Link Quantity Unit Price (€)
TDS sensor Measures conductivity in the water Mauser.pt 1 20.59
MOSFET Works as a switch for the voltage booster Mauser.pt 1 1.14
Battery Rechargeable, 3400 mAh, 3.7 V Li-Ion battery Mauser.pt 3 14.60
BMS Protects, balances and manages charging of the batteries Mauser.pt 1 4.23
Battery holder Holds the batteries and makes battery changing easy Mauser.pt 3 0.65
Charging port DC port that connects to the BMS module Mauser.pt 1 0.92
Buck converter Step-down for microcontroller (12 V → 5 V) Mauser.pt 1 1.89
Magnetic reed switch Switch for the base of the bottle Mauser.pt 1 2.10
Fuse Glass fuse 1 A, 5×20 slow blow Mauser.pt 1 0.18
Fuse holder Cylindrical fuse holder with threads Mauser.pt 1 0.57
Breadboard Protoboard 50×70 for the prototype circuit Mauser.pt 1 0.95
1.1 mm wire Wiring for UV-C light (AWG26) Mauser.pt 1 1.70
Accelerometer Senses movement and if the bottle is upright Kiwi-electronics.com 1 9.53
UV-C LED module Sterilizes the water Fruugo.pt 1 8.95
Pressure sensor Tracks the water amount Fruugo.pt 1 7.95
Temperature sensor Measures temperature and humidity Fruugo.pt 1 7.95
Breadboard kit Includes wires, resistors, LEDs, etc. Joom.pt 1 11.90
Microcontroller ESP32 DEVKIT 1, central control unit Joom.pt 1 7.30
Charger 3S 18650 charger, 12.6 V, 2 A Joom.pt 1 2.50

Estimated Cost per Prototype
Category Estimated Cost (€)
Mechanical Components 6.75
Electrical Components 148.50
Total Estimated Cost per Bottle 155.25
Initial Budget 100.00
Budget Difference +55.25

Personnel Costs

In addition to the material cost per prototype, the development of the system involved personnel costs associated with a team of six engineers. Each engineer worked an average of six hours per day over a four-month period, excluding weekends. This corresponds to approximately 88 working days per engineer, or 528 hours per person, resulting in a total of 3,168 working hours for the entire team.

Assuming an average hourly salary of 14 €, the total personnel cost for the development phase is estimated at 44 352 €. This value reflects the full design, development, integration, and testing process. While not included in the per-unit prototype cost, it represents a significant investment that would typically be amortized across units in a large-scale production scenario.


Total Estimated Cost
Category Value
Team Size 6
Working Period (Months) 4
Average Working Days / Engineer (d) 88
Average Hours / Day (h/d) 6.00
Total Hours / Engineer (h) 528.00
Total Team Working Hours (h) 3 168.00
Average Hourly Rate (€) 14.00
Total Personnel Cost (€) 44 352.00
Material Cost per Prototype (€) 155.25
Total Development Cost incl. Prototype (€) 44 507.25
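The personnel figures above follow from simple arithmetic; a short Python sketch reproducing them:

```python
# Personnel cost estimate: 6 engineers x 88 working days x 6 h/day at 14 EUR/h.
engineers = 6
working_days = 88          # ~22 working days/month over 4 months, weekends excluded
hours_per_day = 6
hourly_rate = 14.00        # EUR

hours_per_engineer = working_days * hours_per_day          # 528 h
total_hours = engineers * hours_per_engineer               # 3 168 h
personnel_cost = total_hours * hourly_rate                 # 44 352 EUR
material_cost = 155.25                                     # per-prototype materials
total_development_cost = personnel_cost + material_cost    # 44 507.25 EUR

print(personnel_cost, total_development_cost)
```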

Quality Metrics & Requirements

To ensure the smart water bottle prototype meets all functional, safety, and performance requirements, a set of measurable quality metrics has been defined. These metrics are based on the intended design performance of the prototype and will be used during testing and validation.

Metric Description Threshold Review Method
Physical Dimensions Bottle size must remain practical and portable for daily use Height 25–27 cm, diameter 7–8 cm Physical measurement
Weight & Ergonomics Bottle should remain comfortable to carry when empty or full Empty weight 300–380 g, full weight below 900 g Scale measurement and user handling review
Water Quality Monitoring TDS sensor should provide useful and stable water quality readings Within ±10 % of calibrated reference values Sensor calibration and comparison testing
Temperature Monitoring Temperature sensor should provide reliable readings Within ±2 °C of reference values Reference thermometer comparison
Water Level Detection Pressure sensor should correctly identify fill level states Empty, half-full, full states detected correctly Controlled fill testing
Motion & Orientation Accelerometer should detect movement and bottle position Correct detection of movement and upright state Functional testing
Energy Efficiency System should minimize unnecessary power consumption Idle below 100 mW, normal use below 1 W Power consumption measurement
Battery Runtime Battery should provide practical daily autonomy Estimated 2–7 days per charge depending on use Runtime testing
Charging Performance Charging system must safely recharge battery pack Stable charging with no overheating Charging cycle observation
Water Resistance Electronics housing must resist splashes and normal cleaning No internal moisture ingress Splash and sealing inspection
UV-C Safety Control UV-C may only activate in safe operating condition Activation only when bottle is fully closed Safety logic testing
Electrical Protection Internal electronics must be protected from faults Fuse, BMS, and regulators function correctly Electrical inspection
Mechanical Durability Bottle must withstand normal daily handling No damage during normal use Handling and inspection
User Interface Visibility LEDs should clearly communicate bottle status Visible and understandable indicators Functional review
System Reliability System should operate consistently without failure Stable operation during extended use Long-duration operation testing
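The 2–7 day battery-runtime target can be cross-checked against the pack in the component list (3 × 3400 mAh Li-ion cells at 3.7 V nominal). The average-power figures and the 80 % usable-capacity factor below are illustrative assumptions, not measured values:

```python
# Rough battery-runtime estimate for the "2-7 days per charge" target.
# Pack: 3 x 3400 mAh Li-ion cells, 3.7 V nominal (3S, per the parts list).
cells = 3
capacity_ah = 3.4
nominal_v = 3.7
usable_fraction = 0.8   # assumed: BMS cutoffs and converter losses

energy_wh = cells * capacity_ah * nominal_v * usable_fraction  # ~30 Wh usable

# Assumed average draw for two usage profiles (to be replaced by measurements):
for label, avg_power_w in [("mostly idle", 0.2), ("heavy use", 0.6)]:
    days = energy_wh / avg_power_w / 24
    print(f"{label}: ~{days:.1f} days per charge")
```

During validation, the measured idle and active power from the Energy Efficiency tests should replace the assumed draw values.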

Review and Validation Process

As the final prototype is still under development, the table above defines the intended quality requirements and planned validation criteria. Once assembly is complete, all metrics will be reviewed and validated by the project team through practical testing, calibration, inspection, measurements, and functional verification.

The team will record the measurement results, compare them with the defined thresholds, and identify any deviations. Any requirement that does not meet its target value will be addressed through design improvements, software calibration, or component adjustment before final approval.

Acceptance Criteria

The smart water bottle prototype will be considered acceptable when the project team has verified that all defined quality thresholds are achieved and no functional or safety issues remain.

The stakeholder analysis is meant to help the project group understand who has an interest in, and power over, the project. It is a way to recognise who will be affected by the final product and to categorize everyone involved, so the project group can plan how to interact with them throughout the project.

Based on the Mendelow Matrix (Figure 2), the stakeholders will be split into four separate groups: Key Figures, Influencers, Interested, and, lastly, Spectators. All stakeholders are placed against two axes representing their interest and their influence. As this is an internal project, the number of stakeholders is limited.

Mendelow Matrix

  • Key Figures (High Interest, High Influence): Clients, Lecturers / Coordinator, Project Group
  • Influencers (Low Interest, High Influence): ISEP Board, Competitors
  • Interested (High Interest, Low Influence): Material Providers, Future Investors
  • Spectators (Low Interest, Low Influence): Logistic Partners
Figure 2: TRAQUA Stakeholder

Analysis of Stakeholders

Spectators — Logistics Partners: While not directly involved, they may eventually experience benefits from an improved inspection system. However, their role is passive, and they will not influence or interact with the project.

Interested:

  • Material Providers: Supply components and materials; their pricing, availability, and lead times directly affect the project budget and timeline.
  • Future Investors: Will potentially invest money into the product, so a close eye must be kept on their expectations.

Influencers:

  • ISEP Board: Though not actively participating, defines academic frameworks and grading guidelines.
  • Competitors: TRAQUA must keep a sharp eye on what competitors develop while keeping their product fresh and at a decent price.

Key Figures:

  • Clients: Central to the project's direction and success — they define the problem and validate the solution.
  • Lecturers / Coordinator: Advise the group, evaluate project quality, offer ongoing feedback, and determine part of the final grade.
  • Project Group: The student developers have the most motivation to succeed and interest in creating a functional system.

Communication Strategy

Each stakeholder group requires a different communication approach based on their position in the Mendelow Matrix. The table below summarizes how the project group communicates with each party and what is expected in return.

Stakeholder Strategy From us → them From them → us Channel Frequency
Clients Manage Closely Progress updates, prototype demos, design decisions, clarification requests Requirements, feedback, validation, priority changes Meetings, email, demos Bi-weekly + milestones
Lecturers / Coordinator Manage Closely Deliverables, reports, presentations, questions Feedback, grading criteria, guidance, corrections Scheduled meetings, email, Moodle/wiki uploads Weekly + each deliverable
Project Group Manage Closely Task status, blockers, decisions, shared documents Same — bidirectional Daily standups, Discord/Teams, Git, shared drive Daily
ISEP Board Keep Satisfied Final deliverables, compliance with academic standards Academic framework, regulations, grading rules Formal submissions via coordinator At defined academic checkpoints
Competitors Keep Satisfied (monitor) No direct communication Market info gathered via public sources (websites, patents, product releases) Market research, web monitoring Monthly scan
Material Providers Keep Informed Quotes requests, orders, specifications Pricing, availability, lead times, VAT, shipping Email, web forms, phone As needed during sourcing phases
Future Investors Keep Informed Pitch, final presentation, poster, brochure, leaflet Interest signals, questions, funding decisions Final presentation, marketing materials End of project
Logistic Partners Monitor (minimal effort) No active communication Passive — potential future end-users N/A (indirect) None during project

Communication principles:

  • Single point of contact: each external stakeholder is handled by one designated team member to avoid mixed messages.
  • Documentation: all formal communication (client meetings, lecturer feedback, supplier quotes) is logged in the project wiki.
  • Escalation: blockers are raised in the next standup; client/lecturer issues are escalated within 24 h.
  • Feedback loop: after each milestone, feedback received is reviewed in the following sprint planning.

TRAQUA uses a structured set of communication channels, each chosen for a specific purpose: fast internal coordination, formal documentation, stakeholder alignment, and customer engagement.

Internal Team Communication

  • WhatsApp — primary channel for day-to-day coordination, quick questions, and informal idea sharing. Fast and low-friction, ideal for immediate feedback.
  • Microsoft Teams — used to store documents, organize files, and hold formal online meetings when in-person is not possible. Channels are structured by workstream (e.g., Hardware, Software, Documentation, Marketing).
  • Jira — sprint backlog, task assignment, sprint retrospectives, and all sprint-related activities are documented and tracked here. Provides traceability from user story to delivered feature.
  • Git (repository) — source code, schematics, and technical drawings are version-controlled. Commit messages reference Jira tickets for traceability.
  • Project Wiki — central knowledge base for the report, meeting minutes, decisions, and deliverables.

Communication with Lecturers / Coordinators

Meetings with teachers are organized every Thursday. The team is obliged to share an agenda by Tuesday evening at the latest so that teachers can prepare any necessary materials. These meetings are used to show the team's progress, ask questions, and share ideas.

After each teacher meeting, the team gathers to hold a retrospective and discuss the upcoming sprint. Outcomes are logged in Jira and the wiki.

Item Detail
Frequency Weekly (Thursday)
Agenda deadline Tuesday 23:59
Channel In-person / Teams
Output Meeting minutes in wiki, action items in Jira
Escalation Email to coordinator for urgent issues

Communication with Clients

Clients define the problem and validate the solution, so regular structured contact is essential.

  • Bi-weekly progress meetings — demo current state, gather feedback, confirm direction.
  • Milestone demos — aligned with major deliverables (interim presentation, functional tests, final prototype).
  • Email — for formal questions, requirement clarifications, and document sharing.
  • Meeting minutes shared within 24h of each meeting to confirm understanding.

Communication with Suppliers

To maintain good contact with suppliers, regular meetings are planned every one to two months. This allows both the supplier and TRAQUA to gather all their information and questions and discuss everything together, instead of sending scattered emails throughout the week or month. This batching saves everyone from dealing with many small tasks.

  • Primary channel: email for quotes, orders, and specifications.
  • Backup channel: phone for urgent availability or lead-time issues.
  • Single point of contact: one team member owns each supplier relationship to avoid mixed messages.
  • Documentation: all quotes, confirmations, and delivery dates are archived in the Teams supplier folder.

Communication with Customers

Customers will have the opportunity to subscribe to a free newsletter that will update them on the company's goals and provide additional hydration and water-quality tips. The application will also include easy access to customer support, ensuring that all customers can reach the company easily.

  • Newsletter — monthly, opt-in, covering company updates and hydration tips.
  • In-app support — chat / contact form for direct questions.
  • Social media — for announcements, community engagement, and marketing.
  • Response SLA — customer support queries answered within 48h.

Communication with Charities / Partners

To keep charities involved, the company will organize regular meetings with them to discuss relevant topics. This helps maintain strong and high-quality partnerships.

  • Frequency: quarterly alignment meetings.
  • Purpose: discuss joint initiatives, impact reporting, and upcoming campaigns.
  • Channel: in-person or video call, minutes shared afterward.

Communication Tools Summary

Tool Purpose Audience
WhatsApp Fast internal chat Project Group
Microsoft Teams File storage, formal meetings Project Group, Lecturers
Jira Sprint management, task tracking Project Group
Git Version control (code, schematics) Project Group
Wiki Documentation, knowledge base Project Group, Lecturers
Email Formal external communication Lecturers, Clients, Suppliers
Newsletter Customer engagement Customers
In-app support Customer service Customers

Communication Principles

  • Right channel for the right message: urgent = WhatsApp; formal = email; technical = Jira/Git; knowledge = wiki.
  • Asynchronous by default: written communication preferred to respect everyone's schedule; meetings reserved for decisions and alignment.
  • Document everything: every meeting produces minutes; every decision is logged.
  • Acknowledge receipt: messages requiring action are acknowledged within 24h, even if the full answer comes later.
  • No silent blockers: any blocker is raised in the next standup or immediately via WhatsApp if critical.

Communication Risks & Mitigation

Risk Impact Mitigation
Message overload on WhatsApp Important info gets lost Use Teams/Jira for anything needing traceability; WhatsApp only for quick sync
Supplier delays in response Sourcing timeline slips Contact multiple suppliers in parallel; escalate after 5 business days of silence
Client unavailable for feedback Design decisions blocked Book meetings 2 weeks in advance; have a backup decision-maker identified
Missed lecturer agenda deadline Meeting less productive Recurring Tuesday reminder in team calendar
Meeting minutes not documented Decisions forgotten / disputed Rotating minute-taker role, published within 24h
Single point of failure on a channel Team member unreachable Key info duplicated in wiki; no critical info lives only in chat

This chapter identifies the risks that may arise during the TRAQUA project and defines how they will be prevented, monitored, and managed if they occur. Each risk is assessed on two dimensions: likelihood (how probable it is) and severity (how damaging the impact would be). The product of these two gives a risk score, which determines the priority for mitigation.

Risk Classification

Risks are categorized by type to make it easier to assign ownership and response strategy:

  • Project risks — affect schedule, scope, budget, or team capacity
  • Technical risks — affect hardware, firmware, software, or integration
  • Operational risks — affect day-to-day execution and infrastructure
  • Safety & environmental risks — affect user safety, health, or the environment
  • Security & data risks — affect confidentiality, integrity, and privacy

Likelihood and Severity Scales

Level Likelihood Severity
1 Improbable Negligible
2 Remote Marginal
3 Possible Moderate
4 Likely Critical
5 Frequent Catastrophic

Risk score = Likelihood × Severity. Scores are interpreted as:

  • 1–4 Low — accept and monitor
  • 5–9 Medium — actively mitigate
  • 10–15 High — mitigate before development milestones
  • 16–25 Critical — must be addressed before the project advances
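The scoring rule and the four priority bands can be expressed as a small helper, useful when generating or re-checking the register programmatically; a sketch in Python:

```python
# Risk scoring as defined above: score = likelihood x severity (each 1-5),
# mapped onto the four priority bands.
def risk_score(likelihood: int, severity: int) -> int:
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity

def risk_band(score: int) -> str:
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 15:
        return "High"
    return "Critical"

# e.g. R08 (data leaks): Likely (4) x Catastrophic (5)
print(risk_band(risk_score(4, 5)))   # Critical
```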

Risk Register

The table below lists all identified risks, their assessment, the prevention strategy (applied before the risk occurs), and the response plan (applied if it does occur). Each risk has an assigned owner responsible for monitoring it throughout the project.

ID Risk Category Likelihood Severity Score Prevention Response Owner
R01 Common illness Project Possible (3) Marginal (2) 6 Good health practices; clear task documentation so work isn't siloed Redistribute tasks temporarily; extend deadline if critical path affected Project Manager
R02 Tasks not completed on time Project Possible (3) Moderate (3) 9 Realistic planning with buffer; weekly sprint reviews; early flagging of blockers Replan sprint; reprioritize backlog; notify supervisor Project Manager
R03 Lack of technical knowledge Project Likely (4) Moderate (3) 12 Skills gap analysis at kickoff; training time allocated; mentor/supervisor support Pair programming; request expert help; simplify scope if blocker persists Technical Lead
R04 Team member departure Project Possible (3) Critical (4) 12 Strong communication; documented processes; cross-training on key tasks Reassign tasks; revise scope; escalate to supervisor Project Manager
R05 Loss of data / code Operational Remote (2) Moderate (3) 6 Git version control; cloud backups (GitHub + Drive); weekly backup checks Restore from most recent backup; document lost work All members
R06 Insufficient testing Technical Remote (2) Critical (4) 8 Written test plan; automated tests where possible; peer review of test reports Extend testing phase; add regression tests; document known issues Technical Lead
R07 Budget overrun Project Possible (3) Moderate (3) 9 Component pricing confirmed before purchase; 15% contingency reserve Substitute cheaper components; deprioritize non-essential features Project Manager
R08 Data leaks Security & data Likely (4) Catastrophic (5) 20 Encrypted communication (TLS); secure credential storage; input validation; access control on BLE pairing Revoke affected keys; notify users; patch vulnerability; post-mortem review Technical Lead
R09 Battery failure / thermal runaway Safety Remote (2) Catastrophic (5) 10 Use certified Li-ion cells; include BMS protection circuit; thermal testing during prototype phase Disconnect battery; trigger product recall procedure if shipped Hardware Lead
R10 Application downtime Operational Frequent (5) Negligible (1) 5 Cloud auto-scaling; health checks; graceful degradation on frontend Auto-recovery; manual restart if needed; status page notification Technical Lead
R11 API downtime (third-party) Operational Remote (2) Marginal (2) 4 Retry logic with exponential backoff; fallback behaviour; cache recent responses Switch to fallback; notify users of degraded service Technical Lead
R12 Battery chemical residue Safety & environmental Improbable (1) Marginal (2) 2 Follow electronics safety protocols; use sealed battery compartments Follow hazardous waste disposal procedure Hardware Lead
R13 UV-C radiation exposure Safety Improbable (1) Negligible (1) 1 N/A for standard operation; enclosures if UV-C modules used Stop use immediately; consult safety documentation Hardware Lead
R14 Short circuit Technical Improbable (1) Negligible (1) 1 Circuit protection; certified components; PCB design review Isolate affected unit; check for damage before reuse Hardware Lead
R15 Supply chain delays (components) Operational Possible (3) Moderate (3) 9 Order components early; identify 2+ suppliers per critical part Substitute equivalent part; adjust schedule; notify supervisor Hardware Lead
R16 Scope creep Project Likely (4) Moderate (3) 12 Clearly defined backlog; change control for new requirements; supervisor sign-off Push new requests to backlog; renegotiate scope if essential Project Manager
R17 Sensor calibration drift Technical Possible (3) Moderate (3) 9 Use calibrated reference solutions; periodic recalibration; temperature compensation Recalibrate; flag readings; document drift pattern Technical Lead
R18 Poor team communication Project Possible (3) Moderate (3) 9 Weekly standup; shared tools (WhatsApp/Teams, Jira); written meeting notes Address in retrospective; adjust communication rhythm Project Manager

Risk Matrix

The risk matrix below plots each risk by likelihood (y-axis) and severity (x-axis). Risks in the top-right corner are the highest priority.

Figure 3: Risk matrix — likelihood vs. severity

Risk Monitoring and Review

Identifying risks once is not enough — they must be tracked throughout the project. The following monitoring process will be applied:

  • Weekly risk check during sprint reviews: the team reviews the register and flags any change in likelihood or impact.
  • Milestone reassessment: before each major milestone (design review, prototype, interim report, final delivery), the full register is re-evaluated.
  • New risk intake: any team member can propose a new risk at any time; the Project Manager assesses it and adds it to the register.
  • Closed risks: once a risk is no longer relevant (e.g., a phase is completed), it is marked closed but kept in the register for traceability.

Detailed Risk Descriptions

R08 — Data leaks (score 20, Critical)

The system processes sensitive data — water-quality measurements tied to user accounts and potentially location information. Unauthorized access could affect all users, cause reputational damage, trigger legal liability under GDPR, and erode trust in the product. Likelihood is high because malicious actors routinely target IoT endpoints and mobile APIs, and attack surface grows with user count. Prevention focuses on encryption in transit and at rest, strict access control on BLE pairing, and input validation. If a breach occurs, the response is to revoke affected credentials immediately, notify affected users within the GDPR 72-hour window, patch the vulnerability, and conduct a post-mortem.

R09 — Battery failure (score 10, High)

The TRAQUA device uses a rechargeable Li-ion battery. While rare in modern hardware, thermal runaway can cause physical harm or property damage — making severity catastrophic despite low likelihood. Prevention relies on certified cells with an integrated Battery Management System (BMS), thermal testing during prototyping, and sealed battery compartments. Response: immediate disconnection, and if shipped units are affected, a recall procedure in coordination with the supervisor.

R03 — Lack of technical knowledge (score 12, High)

The project combines electronics, firmware (ESP32), mobile app development, and sensor calibration — a broad stack that no single team member fully masters at the start. Prevention is done through a skills gap analysis at kickoff, allocated training time in the first sprints, and proactive use of the supervisor and external mentors. If a specific blocker arises during development, the response is pair programming, requesting expert help, or simplifying scope on the affected feature rather than letting it block the critical path.

R16 — Scope creep (score 12, High)

As the project progresses, stakeholders or team members may propose new features that seem small individually but collectively derail the schedule. Prevention is a clearly defined backlog with supervisor-approved scope, and a change-control rule: any new requirement is added to the backlog, not to the current sprint. Response: if a new request is genuinely essential, an existing item is removed to make room, keeping total scope constant.

R04 — Team member departure (score 12, High)

If a team member drops out mid-project, remaining members absorb their workload, which can cascade into further delays. Prevention relies on documentation (so no knowledge is locked in one person's head), cross-training on critical tasks, and early warning signs picked up in weekly retrospectives. Response: immediate task reassignment, scope revision if needed, and escalation to the supervisor.

Procurement Management Strategy

The procurement strategy was designed to ensure that all required components are available on time and that the project can progress without delays. The main focus was on selecting components that are reliable, compatible, and easy to obtain within the project timeframe.

Each component was reviewed before ordering to confirm that it meets the system requirements and can be integrated without issues. This reduced the risk of delays caused by incorrect or incompatible parts and helped keep the procurement process organized.

Make vs Buy Decisions

Most components were purchased, especially electronic parts such as sensors, the microcontroller, and the display. These components require precise manufacturing and are not practical to produce within the project.

Some mechanical aspects, such as the internal mounting and positioning of components inside the bottle, were designed and assembled by the team. This allowed flexibility during prototyping and made it easier to adjust the design when needed.

Suppliers and Procurement Planning

Suppliers were selected based on availability, delivery time, and reliability. Multiple suppliers were used to ensure that all required components could be sourced without delays. Preference was given to suppliers that provide clear specifications and consistent stock levels.

Procurement was carried out in phases. Components needed for early testing were ordered first, allowing development and prototyping to begin as soon as possible. Less critical components were ordered later, once the design was more finalized. This approach reduced the risk of ordering unnecessary or incompatible parts.

Risk Management

To reduce procurement risks, alternative components and backup suppliers were identified for critical parts. Datasheets were carefully reviewed before ordering to ensure compatibility. Procurement was started early to allow enough time to handle delays, missing parts, or specification issues.

This structured approach ensured a smooth procurement process and supported steady project progress.


Procurement Table
Item Supplier Backup Supplier Manufacturer Quantity Lead Time (Days) Notes
TDS Sensor (SEN0244) Mauser DigiKey TPXCKZ 1 2–4 Water quality measurement
MOSFET (IRLZ44N) Mauser DigiKey Infineon 1 1–3 Switching component
Battery (NCR18650B) Mauser Grandado Panasonic 3 2–4 3S pack power supply
BMS (3S) Mauser DigiKey Generic 1 2–4 Battery protection and balancing
Battery Holder (1×18650) Mauser Farnell Generic 3 2–4 Cell mounting
Charging Port (DC connector) Mauser DigiKey Generic 1 2–4 External charger input
Buck Converter (LM2596) Mauser Grandado Generic 1 2–4 12 V → 5 V regulation
Magnetic Reed Switch (SPST-NO) Mauser Farnell Generic 1 2–4 Circuit-killer at bottle base
Fuse (1 A, 5×20 slow blow) Mauser DigiKey Eska 1 1–3 Overcurrent protection
Fuse Holder (5×20) Mauser Farnell Generic 1 1–3 Fuse mounting
Breadboard (Protoboard 50×70) Mauser DigiKey Generic 1 1–3 Prototype circuit board
1.1 mm Wire (AWG26) Mauser DigiKey Goobay 1 1–3 UV-C light wiring
Accelerometer (LIS3DHTR) Kiwi Electronics Farnell STMicroelectronics 1 3–6 Motion and orientation detection
UV-C LED Module Fruugo DigiKey Generic 1 5–8 Water sterilisation
Pressure Sensor (FSR406) Fruugo Fruugo JETTING 1 5–8 Water level measurement
Temperature Sensor (KY-015 DHT) Fruugo Fruugo AOKIN 1 5–8 Temperature and humidity sensing
Breadboard Kit Joom Fruugo Generic 1 5–10 Wires, resistors, LEDs, etc.
Activated Carbon Filter Joom Fruugo Generic 1 5–10 Improves taste
Microcontroller (ESP32 DevKit V1) Joom Fruugo Espressif 1 5–10 Main controller
Charger (3S 12.6 V / 2 A) Joom Worten Generic 1 5–10 External battery pack charger
Total Components - - - 24 - All required parts (20 line items)

The project was structured across eight sprints, preceded by a pre-work phase dedicated to topic selection and initial setup. Each sprint spans approximately one week, running from early March to late June 2026. Project management was handled in Jira, where all tasks were tracked and assigned across the team. The Gantt chart provides a visual overview of the planned timeline, grouping activities by sprint and category.

The pre-work phase covered foundational Scrum activities — stand-ups, retrospectives, and sprint demos — as well as general activities including role assignment. Sprint 1 focused on initial research, documentation, and structural work. Subsequent sprints progressively addressed design, prototyping, coding, and testing. The final sprints are dedicated to the interim and final reports, functional tests, packaging solutions, and multimedia deliverables such as the video, flyer, and poster. This iterative approach allowed the team to review progress regularly through retrospectives and adapt the backlog accordingly, ensuring continuous alignment with project goals.

Figure 4 shows the timeline and backlog with epics in Jira. Some timeline start and end dates are not visible, as the corresponding user stories have either not started yet or have already been completed. Dates in Jira were aligned with the deliverable deadlines defined in the Time chapter.

Figure 4: Jira Timeline

Figures 5 and 6 present the same schedule as a Gantt chart produced in Excel, offering an alternative view to the Jira timeline.

Figure 5: Gantt chart part 1
Figure 6: Gantt chart part 2

Gantt Chart and Key Project Phases

The project timeline spans from late February 2026 to late June 2026, structured across eight iterative sprints plus a pre-work phase. The Gantt chart illustrates the full schedule, with tasks grouped by sprint and color-coded by category. The project was divided into five key phases:

  • Pre-work and Setup (Feb 23 – Mar 5): project selection, initial Scrum setup, role assignment, and backlog definition. Milestone: top-3 project proposals submitted by February 28.
  • Research and Documentation (Sprint 1–2, Mar 5–19): research on water quality and filtration, black box system diagram and structural drafts (milestone: March 11), and the initial list of components and materials (milestone: March 18).
  • Design and Intermediate Deliverables (Sprint 3–4, Mar 19 – Apr 16): detailed system schematics, structural drawings, cardboard modelling (milestone: March 25), Gantt chart and sprint plan publication (milestone: March 21), Interim Report and Presentation submission (milestone: April 12), and the Interim Presentation event (milestone: April 16).
  • Prototyping and Development (Sprint 5–6, Apr 16 – May 27): 3D model video (milestone: April 22), final materials list (milestone: April 29), refined interim report (milestone: May 2), backend and frontend coding, ESP32 integration, agent connectivity, and packaging solutions (milestone: May 13). Functional tests concluded and uploaded by May 27.
  • Final Deliverables and Presentation (Sprint 7–8, Jun 1–25): final report, paper, video, poster, and manual (milestone: June 13), final presentation and individual assessment (milestone: June 18), corrected and refined deliverables (milestone: June 23), and prototype demonstration to the client (milestone: June 25).

Mapping the Plan to Iterative Sprints

The project was managed using an agile Scrum framework, with each week constituting a new sprint. Each sprint followed a consistent structure: a planning session at the start, daily stand-ups throughout, and a retrospective and sprint demo at the end. This iterative approach allowed the team to regularly assess progress, incorporate teacher and peer feedback, and adjust priorities accordingly.

The product backlog was defined during the pre-work phase and broken down into sprint backlogs at the start of each sprint. Each sprint had a clear goal aligned with the upcoming milestone deadlines.

Backlog Management

The backlog was managed exclusively in Jira. At the beginning of each sprint, the team held a planning session to select tasks based on priority and the upcoming milestones. Each task was assigned to a team member and tagged with its parent epic (e.g., Research, Design, Documents, Code, Prototype, Tests, Interim Report).

During the sprint, tasks moved through four states: To Do, In Progress, In Review, and Done. Tasks not completed by the end of a sprint were reviewed in the retrospective and either carried over to the following sprint or re-prioritized in the backlog.
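The four-state flow can be expressed as a small transition table. This is a sketch of the board logic; the back-transition from In Review to In Progress is an assumed convention for rejected reviews, not something stated explicitly in the process above.

```python
# Sketch of the sprint board workflow; the In Review -> In Progress
# back-transition is an assumed convention for rejected reviews.
TRANSITIONS = {
    "To Do": {"In Progress"},
    "In Progress": {"In Review"},
    "In Review": {"Done", "In Progress"},  # review may send work back
    "Done": set(),                         # terminal state
}

def can_move(current: str, target: str) -> bool:
    """Return True if the board allows moving a task from current to target."""
    return target in TRANSITIONS.get(current, set())
```

For example, `can_move("To Do", "In Progress")` is allowed, while jumping straight from "To Do" to "Done" is not — which is exactly what the updated Definition of Done enforces.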

Prioritization

Prioritization was driven primarily by milestone deadlines defined in the Time chapter. Tasks blocking an upcoming deliverable (e.g., the Interim Report on April 12) were assigned the highest priority, regardless of their epic. Within a single sprint, the team applied a simple MoSCoW-style logic:

  • Must have: tasks on the critical path to the next milestone.
  • Should have: tasks improving deliverable quality but not blocking submission.
  • Could have: nice-to-have improvements deferred if capacity ran short.
  • Won't have (this sprint): items pushed to a later sprint or the backlog.

External dependencies (component delivery times, supplier responses, lecturer feedback) were also considered, with dependent tasks scheduled only after their inputs were confirmed available.
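The prioritization logic can be approximated in a short sketch with hypothetical task data (actual ordering happened during planning discussions in Jira): tasks whose external inputs are not yet confirmed are held back, and the remainder are ordered by MoSCoW rank so critical-path work comes first.

```python
# Sketch of MoSCoW-style sprint ordering; task data is illustrative.
MOSCOW_RANK = {"must": 0, "should": 1, "could": 2, "wont": 3}

def plan_sprint(tasks: list) -> list:
    """Keep only tasks whose external inputs are available, then order
    them by MoSCoW priority (critical-path work first)."""
    ready = [t for t in tasks if t.get("inputs_confirmed", True)]
    return sorted(ready, key=lambda t: MOSCOW_RANK[t["priority"]])

tasks = [
    {"name": "Polish poster draft", "priority": "could"},
    {"name": "Finalize Interim Report", "priority": "must"},
    {"name": "Solder sensor board", "priority": "must", "inputs_confirmed": False},
    {"name": "Improve test coverage", "priority": "should"},
]
ordered = plan_sprint(tasks)
```

Here the soldering task is excluded because its components have not arrived, and the report work leads the sprint despite appearing second in the raw list.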

Estimation

Tasks were estimated in story points during sprint planning, with the team converging on a value through brief discussion rather than formal planning poker. The total points committed per sprint ranged from roughly 30 to 50, depending on team availability and the complexity of upcoming deliverables.

Two main challenges emerged with estimation. First, the team's mixed academic backgrounds made it difficult to estimate cross-disciplinary tasks consistently — a task that seemed small to one member could be substantial for another. This was addressed by having the assigned member propose the initial estimate and the rest of the team challenge it only when there was strong reason to.

Second, in early sprints the team noticed that story points were marked as burned down before all sub-tasks of a parent story were closed, which distorted burndown charts. Following Sprint 3, the Definition of Done was updated to require all sub-tasks to be closed before the parent story could be moved to Done, restoring burndown accuracy in subsequent sprints.
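The Definition of Done fix can be made concrete with a small sketch over hypothetical story data: points count toward the burndown only when the story and every one of its sub-tasks are closed, which is the rule adopted after Sprint 3.

```python
# Sketch of the post-Sprint-3 burndown rule; story data is illustrative.
def burned_points(stories: list) -> int:
    """Count a story's points only when the story AND all of its
    sub-tasks are Done (the updated Definition of Done)."""
    total = 0
    for story in stories:
        subs_done = all(s == "Done" for s in story.get("subtasks", []))
        if story["status"] == "Done" and subs_done:
            total += story["points"]
    return total

stories = [
    {"points": 5, "status": "Done", "subtasks": ["Done", "Done"]},
    {"points": 8, "status": "Done", "subtasks": ["Done", "In Progress"]},  # not truly done
    {"points": 3, "status": "In Review", "subtasks": ["Done"]},
]
```

Under the old behaviour the second story's 8 points would have been counted as burned; under the updated rule only the first story's 5 points count.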

The sprints below followed the Scrum framework described above: weekly cycles with a planning session, daily stand-ups, and a closing retrospective and sprint demo, with backlog management, prioritization, and estimation handled in Jira as detailed in the Project Plan section.

Sprint 1: Foundation & Research

Period: March 5, 2026 – March 12, 2026

Sprint 1 was characterized by a heavy “Discovery” phase. The team focused on setting up the technical environment (TRAQ-43) and conducting deep-dive research into water quality and filtration systems. Because this was the inaugural sprint, a significant amount of time was spent refining the backlog and scoping the Interim Report. The team committed to a total of 40 story points.

Key work streams:
  • Environmental Setup: establishing the Scrum framework and project architecture.
  • Technical Research: analyzing water levels and filtration logic (TRAQ-11 through TRAQ-14).
  • Documentation: initial drafting of the Background and Related Work sections for the interim report.

Figure 7: Sprint 1 burndown chart

Sprint 2: Core Development and Reporting

Period: Thursday, March 12th – Wednesday, March 19th, 2026

Sprint 2 followed a Thursday-to-Wednesday cycle. This schedule proved challenging this week due to a school trip on Friday, followed immediately by the weekend, which resulted in an unavoidable “stagnation period” at the very start of the sprint during which no points could be burned down.

Figure 8: Sprint 2 …

Sprint 3: Strategic Completion & Prototyping

Period: March 19th, 2026 – March 26th, 2026

Sprint 3 marked a transition from theoretical research to tangible outputs. The team successfully cleared the “documentation backlog” by finalizing the heavy-weight chapters of the Interim Report. Simultaneously, the project moved into the design and physical modeling phase, with the creation of structural drawings and a physical cardboard model to validate the system's dimensions.

Technical learning point: the team identified a discrepancy in how story points are calculated when sub-tasks remain open. Moving forward, the Definition of Done (DoD) was updated to ensure all granular tasks are closed before the parent story is moved to “Done”, maintaining burndown accuracy.

Figure 9: Sprint 3 burndown chart

Sprint 4: Interim Presentation and Interim Report

Period: March 26th, 2026 – April 1st, 2026

Sprint Goal: Finish up the Interim Report.

Sprint 4 was the final push toward a major project milestone. The team's efforts were almost entirely dedicated to consolidating research and development into the Interim Presentation and finalizing the core technical chapters of the report. This sprint confirmed the unviability of the “Thursday-start” schedule, as the pressure to deliver high-point items was concentrated entirely in the final two days of the cycle.

Figure 10: Sprint 4 burndown chart showing the “late-crunch” pattern.

This section evaluates the effectiveness of each sprint by reflecting on what went well and what could be improved. It includes insights into challenges faced, team performance, and lessons learned to optimize future sprints.

The team's first retrospective (Figure 11) underlined a few issues: the members' different educational backgrounds and working speeds, the team's overall workload, and the project's main idea getting lost. To address this, the team booked a one-hour meeting in which everyone spoke for five minutes about where they thought the project should head, which helped establish common ground. Regarding speed and workload, the team decided to meet more often to spread the work fairly across the team.

Figure 11: First retrospective

The team's second retrospective (Figure 12) underlined fewer issues than the first sprint. The team also recorded the planned improvements directly in the retrospective.

Figure 12: Second retrospective

The third retrospective (Figure 13) mentions three main issues: it can sometimes be hard to divide work equally across the team, some members arrive late, and some members are still not fully familiar with the Scrum environment. The team decided to arrange a meeting to go over Scrum again and to hold longer sprint planning sessions to define the workload properly. Overall progress remained on track.

Figure 13: Third retrospective

The fourth retrospective (Figure 14) took place just before the start of the vacation, with the team preparing for the interim report. No major issues faced the team; members will contact the teachers directly regarding certain documents.

Figure 14: Fourth retrospective
  • report/prm.txt
  • Last modified: 2026/04/28 13:50
  • by team3