It isn't really possible for any of us players to say whether this is a good move or not. Muse have kept the matchmaker's secret sauce under wraps, other than confirming that the base rating uses Glicko-2. I don't have access to that secret sauce either, but if I had to guess, they are probably using something like a generalized linear model with a Bernoulli distribution and a logit link function, aiming to predict the match outcome from the Glicko-2 ratings of the players on each side, possibly with interactions for player role, match type, etc. Maybe they have some non-linear terms too, since the game is co-operative. I expect a generalized linear model because the response variable isn't something you can model well with a standard linear regression, and the Bernoulli distribution with a logit link is what you use when the response is binary (a match is either won or lost).
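To make that guess concrete, here's roughly what such a model looks like in code. To be clear, this is my speculation, not anything from Muse: the column names, the role flag, and the synthetic data are all made up for illustration.

```python
# A minimal sketch of the kind of model I'm guessing at: a GLM with a
# Bernoulli response and logit link, predicting whether team A wins from
# the Glicko-2 ratings of the two sides. Everything here (column names,
# synthetic data, the role interaction) is hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Fake match data: average Glicko-2 rating per side plus a crude role flag.
df = pd.DataFrame({
    "rating_a": rng.normal(1500, 200, n),
    "rating_b": rng.normal(1500, 200, n),
    "pilot_heavy_a": rng.integers(0, 2, n),  # made-up role covariate
})
df["rating_diff"] = df["rating_a"] - df["rating_b"]

# Simulate outcomes so the example actually runs end to end.
p_win = 1 / (1 + np.exp(-df["rating_diff"] / 150))
df["team_a_won"] = rng.binomial(1, p_win)

# Bernoulli GLM with a logit link (the Binomial family's default link),
# rating difference interacted with the role flag.
model = smf.glm(
    "team_a_won ~ rating_diff * pilot_heavy_a",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(model.summary())
```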
The stated objective of the matchmaker is to produce balanced games (balance is used as a proxy for 'fun' because measuring 'fun' is hard). This is a hard problem, because the model is being asked to make predictions when it won't know everything that matters. Measuring how well a model predicts is easy (just count how often it gets things right, with whatever level of sophistication you like in how you do the counting), but there is no standard procedure for determining whether a better model than the current one exists that doesn't involve writing down some candidate models and computing something like the AIC.
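If we did have the data, the AIC comparison I'm talking about would look something like this (continuing with the fake data frame from the sketch above). Lower AIC means the extra terms earn their keep after the penalty for added parameters; again, entirely a sketch.

```python
# Sketch of comparing two candidate specifications by AIC, reusing the
# synthetic `df` from the sketch above. Still entirely hypothetical.
import statsmodels.api as sm
import statsmodels.formula.api as smf

simple = smf.glm("team_a_won ~ rating_diff",
                 data=df, family=sm.families.Binomial()).fit()
richer = smf.glm("team_a_won ~ rating_diff * pilot_heavy_a",
                 data=df, family=sm.families.Binomial()).fit()

print(f"simple AIC: {simple.aic:.1f}")
print(f"richer AIC: {richer.aic:.1f}")
```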
Adding terms to a model is risky; there is a chance of overfitting (when your model does great on past data and terribly on future data because it was fitting noise). If I had to guess, this model already has a fair few terms in it, and adding more is something that would need to be done very carefully.
Since we don't know what the current model is and don't have access to the data that was used to construct and validate it, we are not really capable of knowing what modifications, if any, could improve it. Maybe a term in something like exp(-N), where N is the number of games a player has played, would be a good addition. Maybe a level multiplier would make the system predict better. Maybe non-linear terms in the Glicko-2 rating aren't currently in the model and adding them would improve it.
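If you wanted to try one of those speculative terms, the test would look something like this: add the term, refit, and judge it on data the model hasn't seen, which is also the guard against the overfitting problem from the previous paragraph. The games_played column and the decay scale are pure inventions of mine for the example, continuing from the sketches above.

```python
# Sketch of trying one speculative term: an experience decay exp(-N),
# where N is games played, judged on held-out data rather than the data
# the model was fitted to. Reuses `df` from the earlier sketches.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
df["games_played"] = rng.integers(1, 500, len(df))
df["newness"] = np.exp(-df["games_played"] / 50.0)  # made-up decay scale

train, test = train_test_split(df, test_size=0.3, random_state=0)

base = smf.glm("team_a_won ~ rating_diff",
               data=train, family=sm.families.Binomial()).fit()
extra = smf.glm("team_a_won ~ rating_diff + newness",
                data=train, family=sm.families.Binomial()).fit()

# If the richer model only wins on the training set, that's the
# overfitting signature; the held-out log loss is the number to trust.
for name, m in [("base", base), ("with exp(-N) term", extra)]:
    print(name, "held-out log loss:",
          log_loss(test["team_a_won"], m.predict(test)))
```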
We can't know, and it is probably a good thing we don't, because if I knew the full set of model equations I could easily game the system.
Now, if folks have a novel idea for a variable to add to the matchmaker, that's cool, but unless Muse are completely incompetent and have never consulted a statistician, I'm pretty sure they have already considered levels, matches played, kill/death ratios, average armour breaks per death, and a whole host of other variables they could add to their model. The only thing we have sufficient knowledge to do here is express our like or dislike for the matchmaker from our personal experience and suggest to Muse that it is something we want more time spent on (or, for those who really hate it, ask that it be removed).