Plan Stage
Plan teams:
The Plan stage describes the responsibilities of this collective of teams. Among other things, this means working on GitLab’s functionality around issues, boards, milestones, to-do lists, issue lists and filtering, roadmaps, time tracking, requirements management, notifications, value stream analytics (VSA), wiki, and pages.
- I have a question. Who do I ask?
In GitLab issues, questions should start by @ mentioning the Product Manager for the corresponding Plan stage group. GitLab team-members can also use #s_plan.
For UX questions, @ mention the Product Designers in the Plan stage: Nick Leonard for Plan:Project Management, Nick Brandt for Plan:Product Planning, and Libor Vanc for Plan:Optimize. Plan:Knowledge should follow the process for groups without a designer.
How we work
- In accordance with our GitLab values.
- Transparently: nearly everything is public, we record/livestream meetings whenever possible.
- We get a chance to work on the things we want to work on.
- Everyone can contribute; no silos.
- We do an optional, asynchronous daily stand-up in #s_plan_standup.
Workflow
We work in a continuous Kanban manner while still aligning with Milestones and GitLab’s Product Development Flow.
Capacity Planning
When we’re planning capacity for a future release, we consider the following:
- Availability of the teams during the next release. (Whether people are out of the office, or have other demands on their time coming up.)
- Work that is currently in development but not finished.
- Historical delivery (by weight) per group.
The first item gives us a comparison to our maximum capacity. For instance, if the team has four people, and one of them is taking half the month off, then we can say the team has 87.5% (7/8) of its maximum capacity.
The second item is challenging: it’s easy to underestimate how much work is left on a given issue once it’s been started, particularly if that issue is blocking other issues. We don’t currently re-weight issues that carry over (to preserve the original weight), so this figure is fairly imprecise at present.
The third item tells us how we’ve been doing previously. If the trend is downwards, we can look to discuss this in our retrospectives.
Subtracting the carry-over weight (item 2) from our expected capacity (the product of items 1 and 3) gives us our capacity for the next release.
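A minimal sketch of this calculation, using hypothetical numbers rather than any group’s real figures:

```python
# Illustrative capacity calculation for one group (hypothetical numbers).
historical_velocity = 40        # average delivered weight over recent milestones (item 3)
team_size = 4
available_person_months = 3.5   # one of four people is out for half the month (item 1)

availability = available_person_months / team_size        # 0.875, i.e. 87.5%
expected_capacity = historical_velocity * availability    # 35.0
carry_over_weight = 6           # weight of issues already in development (item 2)

planning_capacity = expected_capacity - carry_over_weight  # 29.0
print(f"Weight available for new issues next milestone: {planning_capacity}")
```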
Estimating effort
Groups within Plan use the same numerical scale when estimating upcoming work.
When weighting an issue for a milestone, we use a lightweight, relative estimation approach, recognizing that tasks often take longer than you think. These weights are primarily used for capacity planning, ensuring that the total estimated effort aligns with each group’s capacity for a milestone.
Key Principles
- Relative Estimation: We focus on the relative complexity and effort required for each issue rather than precise time estimates.
- Aggregate Focus: The sum of all issue weights should be reasonable for the milestone, even if individual issues vary in actual time taken.
- Flexibility: It’s acceptable for an issue to take more or less time than its weight suggests. Variations are expected due to differences in individual expertise and familiarity with the work.
Weight Definitions
Weight | Meaning |
---|---|
1 | Trivial, does not need any testing |
2 | Small, needs some testing but nothing involved |
3 | Medium, will take some time and collaboration |
4 | Substantial, will take significant time and collaboration to finish |
5 | Large, will take a major portion of the milestone to finish |
Initial Planning: During milestone planning, tasks can be estimated up to a weight of 5 if necessary. However, as the milestone approaches and the team moves closer to starting implementation, any task with a weight larger than 3 should be decomposed into smaller, more manageable issues or tasks with lower weights.
Why This Approach: Allowing larger weights early on provides flexibility in high-level planning. Breaking down tasks closer to implementation ensures better clarity, reduces risk, and facilitates more accurate tracking and execution.
We assess the available capacity for a milestone by reviewing recent milestones and upcoming team availability. This ensures that our milestone planning remains realistic and achievable based on the collective effort estimated through these weights.
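To illustrate how these weights feed into planning, here is a small sketch (hypothetical issues and numbers) that totals candidate weights against the capacity figure from the previous section and flags issues that still need decomposition:

```python
# Hypothetical candidate issues for a milestone: (issue reference, weight).
candidates = [("#101", 2), ("#102", 5), ("#103", 3), ("#104", 1), ("#105", 4)]

planning_capacity = 29.0  # from the capacity calculation above

total_weight = sum(weight for _, weight in candidates)
print(f"Planned weight {total_weight} vs. capacity {planning_capacity}")

# Anything heavier than 3 should be decomposed before implementation starts.
for ref, weight in candidates:
    if weight > 3:
        print(f"{ref} (weight {weight}) should be broken down into smaller issues")
```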
Issues
Issues have the following lifecycle. The colored circles above each workflow stage represent the emphasis we place on collaborating across the entire lifecycle of an issue; disciplines will naturally require differing levels of effort depending on where the issue is in the process. If you have suggestions for improving this illustration, you can leave comments directly on the Whimsical diagram.
Everyone is encouraged to move issues to different workflows if they feel they belong somewhere else. In order to keep issues constantly refined, when moving an issue to a different workflow stage, please review any open discussions within the issue and update the description with any decisions that have been made. This ensures that descriptions are laid out clearly, keeping with our value of Transparency.
Epics
If an issue is > 3 weight, it should be promoted to an epic (quick action) and split into multiple issues. It’s helpful to add a task list to the newly promoted epic, with each task representing a vertical feature slice (MVC). This enables us to practice “Just In Time Planning” by creating new issues from the task list as there is space downstream for implementation. When creating new vertical feature slices from an epic, please remember to add the appropriate labels - devops::plan, group::*, Category:* or feature label, and the appropriate workflow stage label - and attach all of the stories that represent the larger epic. This will help capture the larger effort on the roadmap and make it easier to schedule.
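As an illustration only, here is a hedged sketch of applying the promotion and labeling step programmatically. It assumes that quick actions included in a comment posted through the Notes API are executed the same way as in the UI; the project path, issue number, and token variable are placeholders:

```python
# Hedged sketch: promote an over-weight issue to an epic and label it via a comment
# containing quick actions. Assumes API-posted comments execute quick actions as the UI does.
import os
import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_PATH = "gitlab-org%2Fgitlab"   # URL-encoded project path (placeholder)
ISSUE_IID = 12345                       # hypothetical issue

body = "\n".join([
    "Promoting to an epic so it can be broken into vertical feature slices.",
    "/promote",
    '/label ~"devops::plan" ~"group::project management" ~"workflow::planning breakdown"',
])

resp = requests.post(
    f"{GITLAB_URL}/projects/{PROJECT_PATH}/issues/{ISSUE_IID}/notes",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    json={"body": body},
    timeout=10,
)
resp.raise_for_status()
```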
Design Documents
For all tier T1 and T2 roadmap items, and initiatives spanning multiple milestones, we recommend creating a design document using the Architecture design workflow. This approach offers several benefits:
- Single Source of Truth (SSOT): A design document serves as the central place for all important information related to the initiative, reducing time spent searching for decisions across various places.
- Increased Visibility: By creating design documents, we raise awareness of the work done in the Plan stage, such as the work items framework, customizable Work Item Types, custom fields, custom status, GLQL, frontend-driven views, and many more.
- Discoverability: Design documents are easily accessible through our public handbook, aligning with engineering best practices.
- Collaborative Decision-Making: Changes and discussions occur through merge requests, ensuring visibility to all involved team members.
- Comprehensive Entry Point: The design document functions as a primary entry point for the initiative, containing:
- An executive summary
- Links to related epics, issues, and wiki pages
- Links to Status updates
- Implementation details
- A decision log or embedded decisions within the document
- Links to relevant boards or dashboards
This comprehensive approach allows easy onboarding for team members and provides stakeholders with all necessary information in one place.
This is the recommended workflow for all initiatives:
- Create a Slack channel with the convention #f_[feature name].
- Develop a design document using the Architecture evolution workflow. Get started using this template. You don’t need to fill out all sections. This is a living document and it’s expected that it evolves over time.
- An epic hierarchy is created with sub-epics mapping to iterations, each achievable within a milestone.
- Iterations are broken into multiple issues that can be accomplished independently, and PMs schedule those as normal.
- Other actions may be established, such as regular ‘office hours’ calls.
Team members should collaborate to continuously refine the iterations and update the design document as complexity is revealed. This approach ensures that all stakeholders have a clear, up-to-date understanding of the initiative’s progress and implementation details.
Roadmap
In product development at GitLab, Product is responsible for the what and why, Engineering is responsible for the how and when [1]. Maintaining a credible roadmap is therefore a collaborative process, requiring input from both.
The Product Roadmap outlines what the team aims to accomplish over a 4-6 quarter timeline. It is shared across the organization to ensure alignment with the go-to-market strategy and enable reliable commitments to customers.
Changes to the Plan Product Roadmap, made by the Product Manager, are reviewed and accepted by the Engineering Manager of the affected group. This happens at least once a month and is captured in a Wiki Page.
Most items being reviewed during roadmap planning have not yet had detailed technical investigation from engineering. Planning at this resolution is intended to be thoughtful but not perfect. Velocity remains our priority.
Reviewing the Roadmap
By performing a review, Engineering Managers play a key role by ensuring the roadmap is achievable and effectively sequenced to maximize velocity. Below are some best practices to guide a thoughtful review:
- Assess Achievability: Is the timeline realistic given the team’s current capacity, skills, and dependencies?
- Account for Technical Preparation: Does the roadmap allocate time for necessary technical preparation, such as technical spikes or investigations?
- Optimize Team Utilization: Does the sequence of work align with the team’s skill profile, avoiding periods of underutilization or skill mismatches?
- Evaluate Redundancy: How robust is the rest of the roadmap if one item takes longer than anticipated?
- Clarify Requirements: Do you sufficiently understand each proposed change or do you need additional information?
- Ensure Shared Understanding: Do you and your Product and UX counterparts have a shared understanding of all terminology used?
- Seek Opportunities to Optimize: Have you identified opportunities to iterate or increase velocity by adjusting the order of work?
- Reduce Friction: Is the sequence of work likely to cause avoidable conflicts, such as multiple engineers committing to the same codebase areas simultaneously?
- Identify Process-Driven Delays: Are there items expected to take longer due to process requirements (e.g., multi-version compatibility) rather than capacity constraints?
- Account for Cross-Team Dependencies: Are there cross-team dependencies that could put parts of the timeline at risk?
- Incorporate a Buffer: Is a proportion of capacity allowed for exogenous shocks; such as unexpected PTO, or a high-severity incident?
- Lean on Your Experience: When you look at the roadmap as a whole and think about recent quarters, does it look achievable?
Roadmap Organization
graph TD;
  A["devops::plan"] --> B["group::"];
  B --> C["Category:"];
  B --> D["non-category feature"];
  C --> E["maturity::minimal"];
  C --> F["maturity::viable"];
  C --> G["maturity::complete"];
  C --> H["maturity::lovable"];
  E --> I["Iterative Epic(s)"];
  F --> I;
  G --> I;
  H --> I;
  D --> I;
  I --> J["Issues"];
Executing on the Roadmap
Every Roadmap commitment has a Directly Responsible Individual (DRI) for its delivery, which is typically a Tech Lead who leads project management activities such as clarifying scope, coordinating dependencies, and communicating progress. If no engineer in the group has the capacity to assume a Tech Lead role, the Engineering Manager (EM) may step in. The EM is ultimately accountable for overall roadmap execution and cross-team coordination in either case.
The project manager clarifies scope, identifies dependent work, appoints DRIs for work streams, and ensures risks and blockers are prioritized.
The DRI maintains a Wiki page or design document for the project containing a project timeline, project status, links to work items, key participants, a dogfooding proposal, and a decision register. This is encouraged for all important projects, especially Tier 1 and Tier 2 Roadmap commitments. It acts as a Single Source of Truth (SSoT) that greatly improves cross-functional collaboration and ensures decisions made are captured. Previous examples are:
Internal Testing
Plan Engineering regularly tests new functionality internally before releasing to customers. As part of a drive to improve quality in the work we deliver to customers, this process is divided into two parts.
Alpha Testing
Testing that occurs during ongoing development. This is limited to GitLab’s subgroups or projects other than gitlab-org, gitlab-com, or gitlab-org/gitlab. The Plan Stage has two groups that are available for testing: gl-demo-ultimate-plan-stage and gitlab-org/plan-stage.
End-of-line testing
End-of-line (EOL) testing is the final step before release to customers. The finished product is delivered to all GitLab team-members, usually by enabling it for the gitlab-com and gitlab-org groups. This is accompanied by collection of internal feedback, typically using a feedback issue. The minimum duration of this period of testing is determined by the Engineering Manager.
No new scope will be accepted at this time without significant justification and without restarting the testing period. Only defects and fit & finish issues identified during testing will be addressed.
This practice ensures that, while there may be more than one item in end-of-line testing at the same time, the system under test resembles as closely as possible the one intended to be given to customers.
Dogfooding
Dogfooding helps to build confidence in feature readiness and identify shortcomings before they reach the customer. In most cases, if an improvement cannot be adopted for a useful workflow internally it should not be expected to land with customers either. Identifying a dogfooding opportunity ahead of time can help to reach consensus on what the minimum valuable change should include.
Dogfooding opportunities should be meaningful rather than hypothetical. A new workflow is adopted, an existing workflow complemented or improved, or made redundant.
Project leads should strive to implement dogfooding during the final testing phase and should expect to observe some adoption.
Talking With Customers
We aim to have cross-functional representation in every conversation we have with customers.
Customer Conversations calendar
Anyone who is scheduling a call with a customer via sales, conducting usability research, or generally setting up a time to speak with customers or prospects is encouraged to add the Plan Customer Conversations calendar as an invitee to the event. This will automatically populate the shared calendar with upcoming customer and user interactions. All team members are welcome and encouraged to join – even if it’s just to listen in and get context.
You can subscribe to the calendar and invite it as a participant in a customer meeting that you are scheduling using the URL gitlab.com_5icpbg534ot25ujlo58hr05jd0@group.calendar.google.com.
Shadow a customer call
All team members are welcome and encouraged to join customer calls – even if it’s just to listen in and get context.
To ensure upcoming calls appear in your calendar, subscribe to the Plan Customer Conversations calendar. Product Managers add upcoming customer interviews to this calendar and you’re welcome to shadow any call.
- In GCal, next to “Other Calendars” in the left sidebar, click the +
- Select “Subscribe to Calendar”
- In the “Add Calendar” input, paste gitlab.com_5icpbg534ot25ujlo58hr05jd0@group.calendar.google.com
Upcoming customer calls will often be advertised in the #s_plan channel in advance, so look out there also.
Review previous calls
All recorded customer calls, with consent of the customer, are made available for Plan team-members to view in Dovetail.
To access these, simply go to the Plan Customer Calls project on Dovetail and log in with Google SSO. More information is available in the Readme of this project.
If you find you do not have access, reach out to a Plan PM and ask to be added as a Viewer.
Review previous UX Research calls
UX Research calls are scripted calls designed to mitigate bias and to address specific questions related to user needs and/or usability of the product. A selection of UX Research calls are available in the Plan Customer Calls Dovetail Project in the column titled UXR - Research and Validation.
Board Refinement
We perform many board refinement tasks asynchronously, using GitLab issues in the Plan project. The policies for these issues are defined in triage-ops/policies/plan-stage. A full list of refinement issues is available by filtering by the ~“Plan stage refinement” label.
Tracking Committed Work for an Upcoming Release
While we operate in a continuous Kanban manner, we want to be able to report on and communicate if an issue or epic is on track to be completed by a Milestone’s due date. To provide insight and clarity on status we will leverage Issue/Epic Health Status on priority issues.
Keeping Health Status Accurate
At the beginning of the Milestone, Deliverable issues will automatically be updated to “On Track”. As the Milestone progresses, assignees should update Health Status as appropriate to surface risk or concerns as quickly as possible, and to jumpstart collaboration on getting an issue back to “On Track”.
At specific points through the milestone the Health Status will be automatically degraded if the issue fails to progress. Assignees can override this setting any time if they disagree. The policy that manages this automation is here. It can be disabled for any individual issue by adding the ~“Untrack Health Status” label.
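As an illustration (not the actual triage-ops policy), here is a hedged sketch that reports Deliverable issues in a milestone whose health status may need attention. It assumes the group Issues API exposes a health_status field on each issue (available on Ultimate); the group, milestone, and token variable are placeholders:

```python
# Hedged sketch: report issues in the current milestone whose health status may be stale.
# Assumes the group Issues API exposes a health_status field on each issue (Ultimate tiers).
import os
import requests

GITLAB_URL = "https://gitlab.com/api/v4"
GROUP_ID = "gitlab-org"   # example group
MILESTONE = "17.0"        # hypothetical milestone title

issues = requests.get(
    f"{GITLAB_URL}/groups/{GROUP_ID}/issues",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    params={"milestone": MILESTONE, "state": "opened", "labels": "Deliverable"},
    timeout=10,
).json()

for issue in issues:
    status = issue.get("health_status") or "not set"
    if status != "on_track":
        print(f"{issue['web_url']}: {status} - may need collaboration to get back on track")
```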
Health Status Definitions for Plan
- On Track - We are confident this issue will be completed and live for the current milestone
- Needs Attention - There are concerns, new complexity, or unanswered questions that if left unattended will result in the issue missing its targeted release. Collaboration needed to get back On Track
- At Risk - The issue in its current state will not make the planned release and immediate action is needed to rectify the situation
Flagging Risk is not a Negative
We feel it is important to document and communicate that changing any item’s Health Status to “Needs Attention” or “At Risk” is not a negative action or something that should cause anxiety or concern. Raising risk early helps the team respond to and resolve problems faster, and should be encouraged.
OKRs
Active Quarter OKRs
FY25-Q2 Stage-level Objectives are available here (internal).
Previous Quarter OKRs
FY25-Q1 Stage-level Objectives all closed out between 74% and 88% and are available here (internal).
Drafting OKRs using GitLab
Guidance is available, including a video guide, on Approach to OKRs at GitLab. GitLab currently offers some freedom in how to structure OKR hierarchies. We take the following approach in Plan:
- EMs are encouraged to create group-level KRs under stage-level Objectives directly, without creating their own OKR structure.
- Group KRs and Stage Objectives should ladder into a higher Objective, which can exist anywhere in the organization. In the development of OKRs, for example, a stage-level Objective has laddered directly into a CEO KR.
- They should be created or added as child objectives and key results of their parent so that progress roll-ups are visible.
- Product development goals are established in milestone planning, following the regular Product Development Flow, and not in OKRs.
Doing this ensures the hierarchy will be as simple, consistent and shallow as possible. This improves navigability and visibility, as we currently don’t have good hierarchy visualization for OKRs.
An example of a valid single OKR hierarchy is:
flowchart TD
  A[Plan Objective] --> B(Project Management KR)
  A --> C[Product Planning KR]
  A --> D[Optimize KR]
  A --> E[Knowledge KR]
  A --> K[Principal Engineer KR]
  A --> L[SEM KR]
Ownership is indicated using labels and assignee(s). The label indicates the group and/or stage, assignee the DRI.
OKRs should have the following labels:
- Group, Stage, and Section (as appropriate).
- Division (~“Division::Engineering”) to distinguish from other functions.
- updates::[weekly, semi-monthly, monthly] depending on how often the OKR is expected to be updated by the DRI.
Retrospectives
The Plan stage conducts monthly retrospectives asynchronously using GitLab issues. Monthly retrospectives are performed in a Confidential Issue made Public upon Close. Confidentiality of these Issues while Open aligns with GitLab SAFE Framework.
Where necessary, the use of Internal Notes is encouraged to further adhere to SAFE Guidelines. Internal notes remain confidential to participants of the retrospective even after the issue is made public, including Guest users of the parent group.
Examples of information that should remain Confidential per SAFE guidelines are any company confidential information that is not public, any data that reveals information not generally known or not available externally which may be considered sensitive information, and material non-public information.
Retrospective issues are created by a scheduled pipeline in the async-retrospectives project. They are then updated once the milestone is complete with shipped and missed deliverables. For more information on how it works, see that project’s README.
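As a rough illustration of this kind of automation (not the actual async-retrospectives pipeline), a scheduled job could open the confidential retrospective issue through the Issues API; the project path and milestone below are placeholders:

```python
# Hedged sketch of automated retrospective issue creation (not the real pipeline).
# Assumes a token with permission to create confidential issues in the target project.
import os
import requests

GITLAB_URL = "https://gitlab.com/api/v4"
PROJECT_ID = "gl-retrospectives%2Fplan"   # example project, URL-encoded
MILESTONE = "17.0"                         # hypothetical milestone in progress

response = requests.post(
    f"{GITLAB_URL}/projects/{PROJECT_ID}/issues",
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    json={
        "title": f"{MILESTONE} Plan retrospective",
        "description": "Shipped and missed deliverables will be added after the 21st.",
        "confidential": True,
    },
    timeout=10,
)
response.raise_for_status()
print(response.json()["web_url"])
```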
Each EM is the DRI for conducting and concluding their group’s retrospective, along with summary and corrective actions.
The role of the DRI is to facilitate a psychologically safe environment where team-members feel empowered to give feedback with candour. As such they should refrain from participating directly. Instead they should publicise, conclude and make improvements to the retrospective process itself.
Timeline
- 27th (Previous Month) A retrospective issue is automatically created for the milestone in progress.
- 18th The milestone is closed and open issues in the build phase are labeled with ~“missed deliverable”.
- 21st The issue description is automatically updated with shipped and missed deliverables and the team are tagged to add feedback.
- 4th (Next Month) A final reminder is created automatically in #s_plan for final feedback.
- 5th (Next Month) The DRI concludes the retrospective.
Dogfooding Value Stream Analytics (VSA) in the Milestone Retrospective
To make the retrospective more data-driven, we are dogfooding VSA to simplify data collection for the retrospective. This is done by automatically adding a link to the VSA of the current milestone, filtered by group/stage, to the retrospective issue. With Value Stream Analytics (VSA) our team gains visibility into the lifecycle metrics of each milestone through the breakdown of the end-to-end workflow into stages. This allows us to identify bottlenecks and take action to optimize the actual flow of work.
For example, for the review phase, we use VSA to count the time between “Merge request reviewer first assigned” and “Merge request last approved at”. With this data (see the sketch after the list below), we can identify:
- MRs that were bottlenecked due to limited reviewers/maintainers capacity.
- Slow review start times & Idle time post-approval.
- MRs with multiple feedback loops.
- Whether long review time originates from same-team MR reviews or out-of-team MR reviews.
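A minimal sketch of the review-phase measurement described above, using hypothetical MR timestamps rather than real VSA data:

```python
# Hypothetical MR records: time from "reviewer first assigned" to "last approved at".
from datetime import datetime, timedelta

merge_requests = [
    {"ref": "!9001", "reviewer_first_assigned": "2025-05-02T09:00:00", "last_approved_at": "2025-05-06T15:30:00"},
    {"ref": "!9002", "reviewer_first_assigned": "2025-05-03T11:00:00", "last_approved_at": "2025-05-03T16:00:00"},
]

SLOW_REVIEW_THRESHOLD = timedelta(days=3)

for mr in merge_requests:
    assigned = datetime.fromisoformat(mr["reviewer_first_assigned"])
    approved = datetime.fromisoformat(mr["last_approved_at"])
    duration = approved - assigned
    flag = " <- potential bottleneck" if duration > SLOW_REVIEW_THRESHOLD else ""
    print(f"{mr['ref']}: review phase took {duration}{flag}")
```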
Please leave your feedback in this issue.
Concluding the Retrospective
The DRI is responsible for completing the following actions:
- Adding a comment to the retrospective issue summarizing actionable discussion items and suggesting corrective actions.
- Finding a DRI for each corrective action. Creating an issue in gl-retrospectives/plan for each is optional, but doing so and adding the ~“follow-up” label will ensure they’re included automatically in the next retrospective.
- Recording a short summary video and sharing it in #s_plan. This can be discussed in the next weekly team call and can be added to the Plan Stage playlist on YouTube so that it shows up on team pages.
- Closing the issue and making it public.
In both the summary comment and video the DRI should be particularly careful to ensure all information disclosed is SAFE. If the retrospective discussion contains examples of unSAFE information, the issue should not be made public.
Regressions
Regressions contribute to the impression that the product is brittle and unreliable. They are a form of waste, requiring the original (lost) effort to be compounded further with a fix or a reversion and reimplementation of the intended behavior.
Engineering Managers are strongly encouraged to conduct a simple Root Cause Analysis (RCA) when a regression takes place in a feature owned by their group, in order to:
- Inform the author and reviewers of the original MR that it caused a regression.
- Define corrective actions that might prevent or reduce the likelihood of a similar regression in future.
- Identify trends or patterns that can lead to human error.
The following RCA format was trialed in a FY23 Q2 OKR. It can be posted as a comment on the original MR when the regression has been successfully reverted.
**Description of the regression:**
_One-line description of the regression in behavior._
**Bug report:** _[Issue link]_
`@author` (if internal) `@approvers` Please could you reply to this comment, copying the questions below and giving some short answers?
1. Were you aware this MR was reverted in the course of your normal work (e.g. through email notification, general work process)?
1. Did you identify the problematic behavior before approving this MR?
1. If not, what would've made the regression more obvious during review?
1. What changes to our tooling or review process would have prevented this regression from being merged?
1. Were the steps to test the MR mentioned clearly in the description? Were they easy to follow?
1. Do you have any other comments/suggestions?
Please reassure the participants that the purpose is not to apportion blame but to gather data, identify causal factors and implement corrective actions - but ask for a swift and brief response while the information is still fresh.
Technical Debt
The ~“technical debt” label, used in combination with ~“devops::plan”, helps track opportunities for improving the codebase. These labels should be applied to issues that highlight:
- improvements to existing code or architecture;
- shortcuts taken during development;
- features requiring additional refinement;
- any other items deferred due to the high pace of development.
For example, a follow-up issue to resolve non-UX feedback during code review should have the ~“technical debt” label.
Issues marked with this label are prioritized alongside those proposing new features and will be scheduled during milestone planning.
UX
The Plan UX team supports Product Planning, Project Management and Optimize. Product Planning and Project Management are focused on the work items architecture effort. This page focuses mainly on the specifics of how we support this, since it requires alignment and cross-group collaboration.
UX issue management, weights and capacity planning
Product Planning, Project Management and Optimize will create issues for UX work and prepend the title with [UX]. Here is an example: https://gitlab.com/groups/gitlab-org/-/epics/10224#note_1337213171
- UX issues are the SSOT for design goals, design drafts, design conversation and critique, and the chosen design direction that will be implemented.
- Product requirement discussions should continue to happen in the main Issue or Epic as much as possible.
- When the Product Designer wants to indicate that the design is ready for ~“workflow::planning breakdown”, they should apply this label to their issue, notify the PM and EM, and close the issue.
When should a UX issue be used?
UX issues should be used for medium or large projects that will take more than one dev issue to implement (e.g., end-to-end flows, complicated logic, or multiple use cases / states that will be broken down by engineering into several implementation issues). If the work is small enough that implementation can happen in a single issue, then a separate [UX] issue is not needed, and the designer should assign themselves to the issue and use workflow labels to indicate that it’s in the design phase.
Weighting UX issues
All issues worked on by a designer should have a UX weight before work is scheduled for a milestone.
- Issue weights should follow the UX Department’s definitions.
- If the issue is a dedicated [UX] issue, then the issue weight can be added to the weight field, but it should also be duplicated as a ~“design weight:” label. This is for UX Department planning purposes. For smaller issues where implementation and UX work happen in the same issue, UX weight should be added using the ~“design weight:” label (the weight field is used by engineering).
- Product Managers and Product Designers can use issue weights to ensure the milestone has the right amount of work, to discuss tradeoffs, or to initiate conversations about breaking work into smaller pieces for high-weight items.
Work Items
When designing for objects that use the work items architecture, we follow this process to ensure that we are providing value-rich experiences that meet users’ needs. The work items architecture enables code efficiency and consistency, and the UX team supports the effort by identifying user needs and the places where those needs converge into similar workflows.
About work items
The first objects built using the work items architecture support the Parker, Delaney and Sasha personas in tasks related to planning and tracking work. Additional objects will be added in the future, supporting a variety of user personas.
Read more about work items
Terminology
Work items refers to objects that use the work items architecture. You can find more terms defined related to the architecture here: work items terminology.
When we talk about the user experience, we avoid using the term ‘work items’ for user facing concepts, because it’s not specific to the experience and introduces confusion. Instead, we will use descriptors specific to the part of the product we’re talking about and that support a similar JTBD. Here are examples of how we are categorizing these:
- Team Planning Objects: Objects that belong to the Planning JTBD. Currently these are Epics, Issues and Tasks but could include others in the future.
- Strategy Objects: Objects that support strategic, organization-wide objectives. Currently these are Objectives and Key results.
- Development/Build Objects: Objects that support development tasks. These could be MRs, Test Cases, or Requirements.
- Protecting Objects: These may include Incidents, Alerts, Vulnerabilities, Service Desk Tickets
This enables us to differentiate these by persona and workflow. While they may share a common architecture on the backend and similar layout on the frontend, in the UI they may:
- appear in different workflows and areas of the application
- have different data fields
- have different actions users can take on them
Guiding principles
- The DRI for the user experience is the Product Designer assigned to the group that is using the work item architecture for their object(s).
- We work in a user-first mindset, rather than technology-first.
- Pajamas is our design system and new patterns introduced via work item efforts need to solve a real problem that users have, be validated by user research, and follow the Pajamas contribution process.
- We follow Pajamas principles for the user experience.
- MVCs provide value to users, are bug-free, and offer a highly usable experience, as described in Product Principles.
How the architecture is intended to work
When designing with the work items architecture, Product Designers should understand roughly how the architecture works and what implications exist for the user experience.
- A work item has a type (epic, incident), and this controls which widgets are available on the work item and what relationships the work item can have to other work items and non-work item objects.
- The behavior of the work item in terms of performing its targeted JTBD(s) is powered by the collection of widgets enabled for a work item type.
- We want to avoid building logic or views specific to a type. When you need to support a workflow that isn’t currently supported, you can introduce new behaviors through widgets (fields, apps, actions). A practical example: Epics can parent other Epics and Issues. Instead of interconnecting epics and issues this behavior is encapsulated in a ‘hierarchy’ widget, which could be utilized in other work item types that implement hierarchies; such as Objectives and Key Results.
- Similarly, the work item view should not be customized directly for a type. However, the Product Designer can propose a different user experience and the team implementing the work item will incorporate the necessary use cases into the work items architecture.
- Work items can be organized and presented to users in any groupings from an IA/Nav standpoint so long as all views leverage the same SSoT grouping FE components (ex: list, board, roadmap, grid, …). We should only ever need to build and maintain one version of each grouping view that can then be re-used across anywhere we want to display that set of work items. Groupings are determined iteratively based on user needs.
If the quad discovers that the desired user experience would require a greater contribution to the work item architecture than initially thought, they would discuss trade-offs as a team in order to decide whether to proceed or leave the object separate.
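To make the widget composition concrete, here is an illustrative sketch (hypothetical names, not GitLab’s actual classes) of a work item type modeled as a bundle of widgets:

```python
# Illustrative model of work item types as widget bundles; names are hypothetical,
# not GitLab's actual implementation.
from dataclasses import dataclass, field


@dataclass
class Widget:
    """A unit of behavior (field, app, or action) that a work item type can enable."""
    name: str


@dataclass
class WorkItemType:
    name: str
    widgets: list[Widget] = field(default_factory=list)

    def supports(self, widget_name: str) -> bool:
        return any(w.name == widget_name for w in self.widgets)


hierarchy = Widget("hierarchy")      # parent/child relationships
assignees = Widget("assignees")
health_status = Widget("health status")

epic = WorkItemType("Epic", [hierarchy, assignees, health_status])
objective = WorkItemType("Objective", [hierarchy, health_status])
task = WorkItemType("Task", [assignees])

# The view asks "does this type support hierarchy?" rather than "is this an Epic?",
# so new types gain the behavior simply by enabling the widget.
for wit in (epic, objective, task):
    print(f"{wit.name}: hierarchy enabled = {wit.supports('hierarchy')}")
```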
Design Process for Work Items
Problem Validation
The quad that owns the code for the object (incident, epic, etc) decides if something should use the work item architecture based on trade-offs around code reuse and user experience. This should be a cross-functional decision, and the group Product Designer should advise their team regarding how well the user’s ideal workflow could or could not be supported by the work items architecture. This will allow the team to evaluate how much existing frontend pieces of the architecture could be re-used, and what would need to be added or customized in order to support the desired experience.
- As part of the decision making process, Product Designers should do problem validation user research (or leverage existing) to understand the desired user experience, including user goals, tasks, content/data field needs, and whether or not this work item type has relationships and the nature of those relationships.
- During this phase, the Product Designer and Product Manager should ensure that success metrics are defined per our work item research process (link TBD)
- High level wireframes should be produced to ensure everyone has a shared understanding of what is wanted and to establish a medium term vision for the work.
Solution Validation
After the quad decides the work item architecture is suitable, the Product Designer will design the experience in detail. As part of the detailed design, Product Designers, in collaboration with the quad, will:
- Design how existing widgets will be utilized, and any new widgets needed or if existing widgets could be abstracted to fit a new use case. For example: The Timeline widget for incidents was designed in isolation specific to the incident use case. It could be reworked slightly to support more use cases, such as objective or key result check-ins.
- Define how users will access this work item. Design how this work item will appear in existing views, such as lists, or any new views needed for this work item.
- Ensure new components and patterns are contributed back to Pajamas.
- Solution validation should be conducted as needed to ensure the workflow and usability meet user needs.
Research Process for Work Items
We use the methods and tools in the UX Research handbook.
In addition to these, we’re working on gaining an efficiency bonus by using a common screener and building a mini-database of qualified participants aligned to our research needs.
We do a confidence check at different points in the process, particularly before moving a design into the build phase. Sometimes, a design solution is straightforward enough where we’re very confident to move ahead without solution validation. However, there are times when we’re unsure how the design solution will perform in production, thereby resulting in a low level of confidence. When this happens we will do usability testing to build confidence.
UX Paper Cuts
The UX Paper Cuts team has a dedicated role addressing Paper Cuts concerns within the Plan stage.
The UX Paper Cuts team member covering Plan will regularly triage the list of UX Paper Cuts issues that are related to the Plan stage as outlined above, but will also add actionable candidates to a Plan-specific epic for transparency.
Triaged Plan-specific Paper Cuts issues can be found in https://gitlab.com/groups/gitlab-org/-/epics/12061. Currently, quarterly child epics will be created under that parent epic to organize work.
See further details at https://handbook.gitlab.com/handbook/product/ux/product-designer/#suggesting-paper-cuts-to-the-team
Plan Weekly Digest
Background
In Plan we use async weekly updates, called Plan Weekly Digests, to communicate progress on important work to our team members each week.
The Engineering Managers in the Plan stage alternate each week as the DRIs. There are 3 groups in the Plan stage, and one SEM, so every EM is the DRI roughly once every 4 weeks.
The responsibility of the DRI is simply to ensure the issue is ready to be publicized in time for the coming week by reminding everyone to contribute. All team-members are welcome to participate in suggesting content using discussions or adding it directly by editing the description.
Process
- A new confidential issue is created every Monday, 8 UTC. (automatically)
- The issue is assigned to a Plan Engineering Manager according to the schedule below. Their role is to remind others to contribute.
- On Saturday, 8 UTC all team members are reminded to read the updates on the issue via a comment (automatically).
- On Friday, 8 UTC (next week) the issue is closed.
DRIs
Issue creation (auto) | DRI |
---|---|
2025-04-14 | John Hope |
2025-04-21 | John Hope |
2025-04-28 | Vladimir Shushlin |
2025-05-05 | Donald Cook |
2025-05-12 | John Hope |
2025-05-19 | Vladimir Shushlin |
2025-05-26 | Donald Cook |
2025-06-02 | John Hope |
2025-06-09 | Vladimir Shushlin |
2025-06-16 | Donald Cook |
2025-06-23 | John Hope |
2025-06-30 | Vladimir Shushlin |
2025-07-07 | Donald Cook |
2025-07-14 | John Hope |
2025-07-21 | Vladimir Shushlin |
2025-07-28 | Donald Cook |
2025-08-04 | John Hope |
2025-08-11 | Vladimir Shushlin |
2025-08-18 | Donald Cook |
2025-08-25 | John Hope |
Links
Meetings
Most of our group meetings are recorded and publicly available on YouTube in the Plan group playlist.
Weekly group meeting
Plan held a weekly team-meeting as a stage until 2023-11-01. The agenda is still available.
The meeting was removed as its functions are now covered in other ways:
- Slack
- Stage Working Groups
- Group meetings
- Smaller ad-hoc meetings
- Social call
Links / References
~group::project management
~group::product planning
~group::optimize
~group::knowledge
Shared calendar
There is a shared Plan stage calendar which is used for visibility into meetings within the stage.
- To add this shared calendar to your Google Calendar do one of the following:
- Visit this link (GitLab internal) from your browser.
- Click the ‘+’ next to ‘Other calendars’ in Google Calendar, select ‘Subscribe to calendar’, paste c_b72023177f018d631c852ed1e882e7fa7a0244c861f7e89f960856882d5f549a@group.calendar.google.com into the form and hit enter.
- To add an event to the shared calendar, create an event on your personal calendar and add Plan Shared as a guest.
Team Day
Team Days are organized on a semi-regular basis. During these events we take time to celebrate wins since the last team day, connect with each other in remote social activities, and have fun!
Anyone can organize a team day. It starts with creating a Team Day planning issue in the plan-stage tracker and then proceeding to find a suitable date.
Setting a date
A time-boxed vote no more than 3 months but no less than 1 month out has proven to be the most inclusive way to set a date so far. This allows enough time to organize sessions but is usually close enough to avoid colliding with off-sites, or other company-wide activities.
Including at least three major timezones, one for each of AMER, EMEA, and APAC, in the issue description allows people to better see how the day will be divided for them and what they can attend.
It’s good practice to rotate the ‘base’ timezone of the Team Day to spread the opportunity for attendance. For example; the FY23-Q4 Team Day was based on a full UTC day, the FY24-Q3 on a full day AEST.
Sessions
The day is composed of sessions proposed and organized by team-members. These are typically allocated 1hr, though they can be longer or shorter. Sessions can be scheduled in advance to allow other team-members to plan their attendance and participation.
Sessions can be anything really, so long as it aligns with the values. Team-members can organize a game, teach a skill, give a talk on something they know, or anything else they think others might enjoy.
Some examples of sessions we’ve had on previous team days include:
- A cooking class with a former professional chef.
- Watching a holiday film together.
- Lateral Thinking Games.
- A home woodworking workshop tour and demonstration.
- Remote games; such as Gartic Phone and Drawsaurus.
Free time slots can be used on the day to hold impromptu events requiring little or no preparation.
Participation
Participation in team day is encouraged for any team-member or stable counterpart in Plan. If you collaborate with Plan team-members on a regular basis you’re also very welcome to attend.
Participating team-members are encouraged to drop non-essential work and take part in any sessions during the day that they wish to. Those assigned to essential work; such as critical bugs, incidents, or IMOC shifts, are encouraged to participate between their other obligations.
Team day is a normal workday for those choosing not to participate.
Expenses
Some sessions may require small purchases to participate fully; for example, ingredients for a cooking class or hosting of a private video game server.
Unless communicated in advance these are not expensable.
The DRI for organizing Team Day may pursue a budget for expenses under existing budgets; such as the team building budget. If successful it should be made clear to team-members well in advance:
- What purchases qualify for reimbursement.
- The policy the expense qualifies under; including handbook link, policy category, and classification in Navan.
- Any additional handbook guidance that will help team-members utilize the budget.
Past Team Days
Tips for a Successful Team Day
- Watch out for Daylight Saving Time when organizing for Q1 and Q3. When the date is set, check that the timezones in the planning issue still match the timezones in use on the day (for example, AEST vs. AEDT).
- Secure expense budget and communicate at least a week in advance of the Team Day.
- Ensure Google Calendar events are transferred from the planning issue to the Plan Shared Calendar a week in advance of the event date.
- Ensure everyone has access to the calendar, and have easy step-by-step directions for creating a new event on the calendar (Adding events to a shared calendar can be slightly confusing).
- Communicate this change in SSOT, and encourage participants to add their own sessions in the calendar as free slots.
Team Process
Each group within the Plan stage follows GitLab’s product development flow and process. This allows for consistency across the stage, enables us to align with other stages and stable-counterparts, and enables us to clearly understand our throughput and velocity. We’re currently focused on strictly following the process stated in the handbook, as opposed to creating our own local optimizations.
In some cases we need to dogfood a new Plan feature that may adjust our adherence to GitLab’s process. If that happens we assign a DRI responsible for setting the objective, reporting on the outcomes and facilitating feedback to ensure we prioritize improvements to our own product. This ensures we’re not making a change for the sake of making changes, and gives us clarity into our own evaluation of a change to the product.
There are a couple of process-related improvements we’ll continue to adopt:
- Iterations: We’ve recently started organizing the prioritized work in a given milestone into weekly iterations. This doesn’t change any of the canonical process, and allows us to break a month’s worth of work into sizeable timeboxes. Intended outcome: Dogfood iterations (the feature), improve velocity and give more granular visibility into the progress of issues. DRI: @donaldcook
Stage Working groups
Like all groups at GitLab, a working group is an arrangement of people from different functions. What makes a working group unique is that it has defined roles and responsibilities, and is tasked with achieving a high-impact business goal fast. A working group disbands when the goal is achieved (defined by exit criteria) so that GitLab doesn’t accrue bureaucracy.
Stage Working Groups are focused on initiatives that require collaboration between multiple groups within the stage. The structure of stage working groups is similar to company-wide working groups, with DRI and well-defined roles. The initiatives are driven by a stage-level product direction rather than an Executive Sponsor, and can be formed of just Functional Leads and members who participate in fulfilling the exit criteria.
Active Stage Working Groups
Archived Stage Working Groups
Product Outreach
There can be a gap in understanding between Engineering and Product on a team. We are experimenting with a pilot program that will allow engineers to spend time in the world of Product, with the goal of greater mutual communication, understanding and collaboration. It helps us work more effectively as a team for better features.
Product Shadowing schedule
Engineering team-members can shadow a product stable-counterpart. Shadowing sessions last two working days, or the equivalent split over multiple days to maximize experience with different functions of the role. In particular, the session should include at least one customer call. To shadow a counterpart on the team:
- Create an issue in the plan project tracker using the Product-Shadowing template;
- Create a WIP MR to this page to update the table below, adding your name and issue link, and
- When your counterpart is assigned to the issue, add their name, remove WIP status and assign to your manager for review.
Month | Engineering counterpart | Product counterpart | Issue link |
---|---|---|---|
2020-07 | Charlie Ablett (@cablett) | Keanon O’Keefe (@kokeefe) | gitlab-org/plan#118 |
2020-10 | Jan Provaznik (@jprovaznik) | Gabe Weaver (@gweaver) | gitlab-org/plan#185 |
Speed Runs
- Labels
- Issues
- Epics
- Requirements Management
Engineering Scaling Targets
We’re tracking a number of issues that we believe could cause scalability problems in the future.
Type | Description | Estimated Timeline for Failure | Resolution Due Date | 12 Month Target | Issue | Status |
---|---|---|---|---|---|---|
Redis Primary CPU | Unexpected load on the Shared State Redis instance caused by SUBSCRIBE, UNSUBSCRIBE and PUBLISH commands. | Unknown | November 2023 | 150k Concurrent WebSocket Connections at peak | | Okay |
Redis Memory | Retention of Action Cable messages in Redis Shared State memory due to high numbers of and/or stalled/hung clients. | Unknown | November 2023 | 150k Concurrent WebSocket Connections at peak | #326364 | Okay |
Primary DB | Scaling a combined ‘Work Items’ table consisting of all current issues, epics, requirements and test cases. | Unknown | November 2024 | 50k Work Items created per day | | Okay |
Note: Work is ongoing on migration helpers to mitigate Int4 Primary Key Overflows. These will provide a standard way to resolve these issues.
Current Large Tables
Some tables in our database have grown significantly and may pose scalability issues.
Table Name | Current Size (GB) | Planned Mitigation Issue | Status |
---|---|---|---|
description_versions | 424.2 | https://gitlab.com/gitlab-org/gitlab/-/issues/412704 | Critical |
issues | 278.4 | https://gitlab.com/groups/gitlab-org/-/epics/10987 | Warning |
resource_label_events | 165.3 | https://gitlab.com/gitlab-org/gitlab/-/issues/412705 | Warning |
sent_notifications | 508.1 | https://gitlab.com/gitlab-org/gitlab/-/issues/417233 | Critical |
system_note_metadata | 173.5 | | Warning |
By continually monitoring these tables and applying the planned mitigations, we aim to maintain optimal performance and prevent any scalability issues.
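For reference, a hedged sketch of how the on-disk size of these tables can be checked directly in PostgreSQL; the connection details are placeholders, not GitLab’s production setup:

```python
# Hedged sketch: check on-disk sizes of the tables listed above using standard
# PostgreSQL size functions. Connection details are placeholders.
import psycopg2

WATCHED_TABLES = ["description_versions", "issues", "resource_label_events",
                  "sent_notifications", "system_note_metadata"]

conn = psycopg2.connect("dbname=gitlabhq_production host=localhost user=readonly")
with conn, conn.cursor() as cur:
    for table in WATCHED_TABLES:
        cur.execute(
            "SELECT pg_size_pretty(pg_total_relation_size(%s))",
            (table,),
        )
        print(f"{table}: {cur.fetchone()[0]}")
conn.close()
```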
Metrics
Plan xMAU