April 19, 2026
Launching a 360 feedback process is not just about sending out a questionnaire. The quality of the results depends first and foremost on one critical decision: who you invite to review the participant.
Too few reviewers, poorly chosen profiles, or an unbalanced panel can lead to vague, biased, or unhelpful feedback. On the other hand, a carefully selected group of reviewers provides a much more accurate picture of how behaviors are perceived in day-to-day work.
In this guide, you will learn who to invite to a 360 feedback, how many reviewers to include, how to build a balanced panel, and which mistakes to avoid if you want feedback that is truly useful for development.
Still deciding which format is right for your organization? Start here: 360 vs. 180 Feedback: what are the differences and how to choose?
Why reviewer selection matters so much in a 360 feedback process
The value of a 360 feedback process comes from the variety of perspectives it brings together. The same person will not display exactly the same behaviors with their manager, peers, direct reports, or internal clients.
That diversity of viewpoints is what makes it possible to assess competencies such as:
- communication
- listening
- collaboration
- influence
- leadership
- prioritization
- interpersonal effectiveness
A poorly designed reviewer panel distorts the results. For example:
- too many close colleagues may artificially soften the feedback;
- leaving out direct reports deprives a manager of crucial input on their leadership style;
- reviewers who do not know the participant well enough often provide superficial answers.
The right panel is not meant to be “kind” or “harsh.” It is meant to be representative.
Who should you invite to a 360 feedback? The main reviewer categories
1. The direct manager
In most cases, the direct manager is an essential reviewer. They can provide valuable insight into:
- decision-making
- prioritization
- goal achievement
- accountability
- alignment with role expectations
Recommendation: include 1 direct manager.
In matrix organizations, it may also make sense to add a second managerial perspective, such as a functional manager or a cross-functional project lead.
2. Peers
Peers are often best placed to assess day-to-day collaboration behaviors. They directly observe:
- communication quality
- teamwork
- influence
- conflict management
- reliability in cross-functional work
Recommendation: include 4 to 6 peers.
This is often the richest category in a 360 feedback process, provided you do not select only close colleagues.
Best practice: combine:
- peers from the same team or function;
- peers who work with the participant across projects or departments.
3. Direct reports
For anyone managing a team, direct reports are a critical source of feedback. They are best placed to assess:
- clarity of expectations
- delegation
- supportiveness
- listening
- recognition
- role modeling
- ability to help others grow
Recommendation: include at least 3 direct reports, ideally 4 to 6.
Below that threshold, perceived confidentiality drops significantly and responses may become overly cautious.
If the manager leads a very small team, it may be better:
- either not to open this category;
- or to merge it into a broader category such as “extended team” if your methodology allows it.
4. Internal or external clients
This category is especially useful for roles that involve strong service, coordination, or partnership dimensions. Internal or external clients can provide highly relevant feedback on:
- responsiveness
- service quality
- understanding of needs
- reliability
- partnership mindset
Recommendation: include 2 to 4 internal or external clients, depending on the role.
For some leadership roles, it may also be relevant to include strategic external partners.
5. Self-assessment
Self-assessment does not replace external feedback, but it is essential for interpreting the results. It helps identify:
- gaps between self-image and external perceptions;
- underestimated strengths;
- blind spots;
- key development priorities.
In a well-facilitated 360 feedback process, it is often the comparison between self-perception and external perception that triggers the most meaningful insights.
How many reviewers should you invite to a 360 feedback?
In most cases, a strong panel includes 10 to 15 reviewers in total.
A common distribution is:
- 1 self-assessment
- 1 to 2 managers
- 4 to 6 peers
- 3 to 6 direct reports, if relevant
- 2 to 4 internal or external clients, if the role requires it
Recommended minimum number
In practice, a 360 feedback process becomes fragile when the number of respondents is too low.
Useful benchmarks:
- fewer than 8 reviewers: results are often not robust enough;
- 10 to 15 reviewers: a good balance between diversity and readability;
- more than 20 reviewers: limited additional value, with greater complexity and sometimes lower response quality.
The goal is not to get “as many reviewers as possible,” but to gather enough varied perspectives to reveal reliable patterns.
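The benchmarks above can be sketched as a small sanity check. This is a hypothetical helper, not part of any specific 360 feedback platform; the category names and thresholds below simply encode the recommendations from this guide and can be adjusted to your own methodology.

```python
# Hypothetical sketch: sanity-check a reviewer panel against the
# benchmarks above (10-15 reviewers overall, and enough members in
# each open category to preserve anonymity). Names and thresholds
# are illustrative assumptions, not a fixed standard.

MIN_TOTAL, MAX_TOTAL = 10, 15
MIN_PER_CATEGORY = {"peers": 4, "direct_reports": 3, "clients": 2}

def check_panel(panel):
    """Return a list of warnings for a panel given as {category: count}."""
    warnings = []
    total = sum(panel.values())
    if total < 8:
        warnings.append(f"Only {total} reviewers: results are often not robust.")
    elif total < MIN_TOTAL or total > MAX_TOTAL:
        warnings.append(f"{total} reviewers: outside the 10-15 sweet spot.")
    for category, minimum in MIN_PER_CATEGORY.items():
        count = panel.get(category, 0)
        # A category that is open but thin threatens perceived anonymity.
        if 0 < count < minimum:
            warnings.append(
                f"Only {count} in '{category}': anonymity may feel insufficient."
            )
    return warnings

# Example: a panel that is both too small and thin on direct reports.
for warning in check_panel({"manager": 1, "peers": 3, "direct_reports": 2}):
    print(warning)
```

A check like this will not choose the right individuals for you, but it catches the structural mistakes described later in this guide (panels that are too small, or categories with only one or two respondents) before the questionnaire goes out.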
How to choose the right reviewers: the criteria to use
Beyond categories, the quality of the panel also depends on which individuals you choose within each group.
1. Real knowledge of the participant’s work
A reviewer should have observed the participant in real work situations. They should be answering based on actual behavior, not on a vague general impression.
2. Sufficient frequency of interaction
It is better to choose someone who works with the participant regularly than someone senior or prestigious who has only limited exposure.
3. Enough history in the relationship
As a general rule, it is better to select people who have worked with the participant for at least 6 months. That gives them enough perspective to provide meaningful feedback.
4. Diversity of perspectives
A strong panel combines different types of working relationships. Avoid:
- panels made up only of allies;
- overly homogeneous groups;
- selections designed mainly to reassure the participant.
5. Ability to provide thoughtful input
A reviewer who is overloaded or only weakly engaged is more likely to rush through the questionnaire. It is often helpful to inform reviewers in advance so they understand the importance of their input.
Who should choose the reviewers?
There are three main models.
Model 1: the participant chooses alone
The participant selects all reviewers themselves.
Advantage: strong ownership of the process.
Limit: higher risk of selection bias.
Model 2: HR or the manager chooses alone
The panel is defined without input from the participant.
Advantage: greater apparent objectivity.
Limit: the process may feel imposed and generate less buy-in.
Model 3: participant, manager, and HR co-construct the panel
The participant suggests an initial list, which is then reviewed, validated, or adjusted by the manager and/or HR.
This is usually the most balanced model, because it combines:
- ownership of the process;
- control of selection bias;
- representativeness of the panel.
The most common mistakes to avoid when choosing reviewers
Selecting only supportive people
This is the most common mistake. The feedback may feel reassuring, but it will not be very useful. A 360 feedback process only creates value if it highlights nuances and perception gaps.
Leaving out direct reports for a manager
Without this category, the leadership diagnosis remains incomplete. Direct reports are often the best judges of whether managerial intent matches everyday behavior.
Inviting too few people in a given category
When a category includes only one or two people, respondents may fear they will not remain anonymous. That can strongly affect honesty.
Choosing reviewers who do not know the participant well enough
A questionnaire completed without meaningful observation adds very little value. The result is often a set of neutral or generic responses.
Mixing all categories together in the report
One of the strengths of a 360 feedback process is the ability to compare perceptions across groups. An aggregated report without segmentation removes an important layer of insight.
Neglecting communication beforehand
Reviewers should understand:
- why they were selected;
- how their responses will be used;
- what confidentiality protections are in place.
You can also share this supporting article in advance: 10 tips for responding well to a 360 feedback survey as a reviewer
Example of an ideal panel depending on the role
For a team manager
- 1 manager
- 4 to 5 peers
- 4 to 6 direct reports
- 1 self-assessment
- optionally 2 internal clients
For an individual contributor or expert
- 1 manager
- 5 to 6 peers
- 2 to 4 internal clients
- 1 self-assessment
For a senior leader
- 1 to 2 sponsors or managers
- 4 to 6 peers or executive committee members
- 3 to 5 direct reports
- 2 to 4 strategic partners
- 1 self-assessment
Checklist before validating the reviewer list
Before launch, make sure that:
- each reviewer truly knows the participant;
- each category is sufficiently represented;
- anonymity is protected;
- the panel is not made up only of close supporters;
- interactions are regular and meaningful;
- reviewers have been informed in advance.
In summary
Choosing the right reviewers for a 360 feedback process is a strategic step, not a minor administrative detail. To obtain feedback that is reliable, useful, and actionable, you need a panel that is:
- diverse
- balanced
- large enough
- made up of people who have genuinely observed the participant’s behavior at work
In most cases, aiming for 10 to 15 reviewers, selected through a co-construction process involving the participant, the manager, and HR, is a solid foundation for success.
FAQ – Who should you invite to a 360 feedback?
Should you always include the direct manager?
Yes, in most cases. The direct manager provides important insight into accountability, priorities, and overall role expectations. Exceptions may exist in very specific contexts, but they are uncommon.
How many peers should you invite?
In general, 4 to 6 peers is a strong benchmark. This gives you enough perspective while keeping the panel manageable.
Can you run a 360 feedback with very few reviewers?
It is technically possible, but rarely advisable. Below a certain threshold, reliability drops and anonymity may feel insufficient.
Should direct reports be included?
Yes, whenever the participant manages a team. Their perspective is often essential for assessing real leadership behavior.
Who should validate the final reviewer list?
In most cases, the best approach is a co-construction process involving the participant, the manager, and/or HR.
Further reading
- 360 Feedback: definition, process, and implementation
- 360 vs. 180 Feedback: what are the differences and how to choose?
- 10 tips for responding well to a 360 feedback survey as a reviewer
- Big Five personality test in the workplace
A strong 360 feedback platform should make it easy to manage these different reviewer categories, protect anonymity, and deliver segmented results so that collected perceptions can be translated into meaningful development actions.