There are several known issues with the current implementation of the canopy matchmaker. One of these is that the combination of the "apptology" and the greedy algorithm currently in use is not sufficient to always decide intelligently which solution(s) should be launched and how they should be configured.
Some examples to illustrate the current issues with the matchmaker:
For example, suppose app 1 provides both "stickyKeys" and "slowKeys", and app 2 provides both "slowKeys" and "debounceKeys". If the user wants stickyKeys, slowKeys and debounceKeys, it is not clear what the matchmaker should do (except perhaps launch both apps and disable slowKeys on one of them). This cannot be achieved with the current implementation, nor could it be if we introduced a "canopy style" approach to selecting solutions based on the apptology.
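A minimal sketch of why the greedy approach falls down on this example. The solution names, capability keys and the `greedy_match` function are all illustrative, not the real matchmaker code:

```python
# Hypothetical sketch of the greedy matchmaking problem described above.
needs = {"stickyKeys", "slowKeys", "debounceKeys"}

solutions = {
    "app1": {"stickyKeys", "slowKeys"},
    "app2": {"slowKeys", "debounceKeys"},
}

def greedy_match(needs, solutions):
    """Repeatedly pick the solution covering the most unmet needs."""
    unmet, chosen = set(needs), []
    while unmet:
        best = max(solutions, key=lambda s: len(solutions[s] & unmet))
        if not solutions[best] & unmet:
            break  # no remaining solution covers any unmet need
        chosen.append(best)
        unmet -= solutions[best]
    return chosen

chosen = greedy_match(needs, solutions)
# Both apps get launched, but both provide slowKeys; the greedy
# algorithm has no notion of disabling slowKeys on one of them.
overlap = solutions["app1"] & solutions["app2"]
print(chosen, overlap)  # ['app1', 'app2'] {'slowKeys'}
```

The greedy step happily covers all three needs, but nothing in the algorithm models the duplicated slowKeys effect, which is exactly the gap the apptology is papering over.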
The concept and execution of the "apptology" are problematic because it is implemented as what appears to be a "property of capabilities" (that is, an ontology describing what things solutions can do) rather than what it really is: a "property of solutions". The issue is that, say, "duplication of function" occurs because two solutions will both attempt to meet a particular need/capability, and neither of them can have this capability configured away. It is simply a "happy accident" (or an unhappy one) that this has turned up in the application functions that we have run into up front, for example that of being a screenReader, or that of being a magnifier. It is mostly accidental that the solutions we happen to have do indeed behave this way with respect to these "application functions" - that is, they will meet them in a way which can't be configured away and so will annoy the user through this competition. You would think that "being a magnifier" could be configured away by selecting a magnification of 100%, but in practice it can't.
So, the "apptology" is a hack that exposes the fact that not only is our optimisation algorithm faulty (the greedy/monotonic algorithm operated by the canopy matchmaker), but its fitness landscape is drawn up in a faulty way too. We should instead be considering an "effectology", which is "the space of effects that the user is exposed to in a particular configuration of the solutions". We can't expect to optimise over this with a greedy algorithm, because we need to test all possibilities of configuring, for example, each pair of solutions to see if either of them can be "configured away" to avoid conflicting with the other.
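The pairwise test could look something like the following sketch. The effect spaces and configuration names are invented for illustration (in particular, a magnifier whose screen reading can be switched off is an assumption, and per the magnifier observation above, real solutions often cannot be configured away like this):

```python
from itertools import product

# Hypothetical effect spaces: each solution maps each of its possible
# configurations to the set of effects the user would experience.
effects = {
    "magnifierApp": {
        "default": {"magnification", "screenReading"},
        "readerOff": {"magnification"},  # reading configured away (assumed possible)
    },
    "screenReaderApp": {
        "default": {"screenReading"},    # reading cannot be turned off
    },
}

def compatible_configs(a, b):
    """Search every pair of configurations for one with no duplicated effect."""
    for ca, cb in product(effects[a], effects[b]):
        if not effects[a][ca] & effects[b][cb]:
            yield ca, cb

print(list(compatible_configs("magnifierApp", "screenReaderApp")))
# [('readerOff', 'default')] - here the conflict can be configured away
```

Note that this already requires enumerating the product of the two solutions' configuration spaces, which is precisely what a greedy, monotonic algorithm never does.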
In the meantime we will carry on with the hack, because of the "happy accident" mentioned above, and because in practice we have so few solutions and capabilities that it doesn't really matter that we are strongarming the algorithm for the time being.
To fix the problem, firstly we need a hugely richer solutions registry, including information about what effects a particular configuration of a solution will have. We had decided to do this using something like the model transformation framework - this would be encoded as "a mapping from the space of configurations to the space of effects". For example, the fact that the screen reader function of a screen reader application cannot be turned off would be encoded as a "literalValue" in its effectology mapping from its configuration document to its effects document.
The things that we call the "apptology" would then end up as possible keys in the "effectology", alongside all the other capability keys that we have. The difference would be that we have a model transforms document that encodes what the resulting "effectology document" looks like given a "configuration document" (that is, something resulting from a user's preferences set). We would then have a much more elaborate algorithm than the canopy algorithm, one that explored the space of possible configurations that could be used to meet these needs.
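As a rough sketch of what such a mapping might look like: each output effect is produced either from a path in the configuration document or as a fixed "literalValue". The rule names, paths and the tiny interpreter below are illustrative assumptions, not the real model transformation framework:

```python
# Hypothetical "effectology" mapping document: the screen reading effect
# is a literalValue (it can never be configured away), while the
# magnification effect is read from the configuration document.
effectology_spec = {
    "effects.screenReading": {"literalValue": True},
    "effects.magnification": {"inputPath": "display.magnification"},
}

def lookup(doc, path):
    """Resolve a dotted path inside a nested document."""
    for part in path.split("."):
        doc = doc[part]
    return doc

def transform(config, spec):
    """Map a configuration document to the resulting effects document."""
    out = {}
    for out_path, rule in spec.items():
        key = out_path.split(".")[-1]
        if "literalValue" in rule:
            out[key] = rule["literalValue"]
        else:
            out[key] = lookup(config, rule["inputPath"])
    return {"effects": out}

print(transform({"display": {"magnification": 1.5}}, effectology_spec))
# {'effects': {'screenReading': True, 'magnification': 1.5}}
```

The key point is that no configuration document can make `screenReading` come out false, which is exactly the non-configurable-function fact the apptology currently hard-codes.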
This might well be out of scope for the whole of APCP, since it is a significant amount of work, and we don't really expect to have so many solutions or capabilities that we can't mostly finesse them by continuing to hack new elements into the "apptology".
But we should be clear that the "apptology" is really "a function of the solutions that we have" rather than "a function of the capabilities in the world", and we can expect it to be an obstruction in the long term to onboarding new solutions: if a new solution gets integrated that has a set of capabilities that don't fit into our existing apptology, some central "apptology wallah" will need to be consulted to explain how (and whether) the apptology can be hacked further to integrate it.
In practice, applications might turn out to fall into various stereotypical types with mostly well-understood and mostly non-overlapping capabilities, and we might be able to get away with this for an extended time.
The other risk with the full optimisation algorithm is that it will strongly impact our ability to do "lightweight" or real-time matchmaking based on, say, context changes. It may turn out that the "BIG UP FRONT MATCHMAKING" model that we decided on for C4A will continue to serve us well, in that we may end up with an "expensive cloud-based or desktop-based matchmaker" after all, which we can't expect to run every 20 seconds on a low-powered device.