Neil Grant, Partner, Gordons Partnership LLP
Until ratings were introduced in Adult Social Care in October 2014, CQC operated what, in essence, was a compliance system of regulation: services were either compliant with an Essential Standard or they were not. If they were not, the severity of the breach was judged as being major, moderate or minor. It wasn’t a perfect system of regulation (it never is) but at least it was clear. There was one system of assessment based on compliance, exercised under section 60 of the Health and Social Care Act 2008 (the 2008 Act).
It is no exaggeration to say that the introduction of ratings changed the whole way CQC regulates providers. Ratings were introduced by way of an amendment to section 46 of the 2008 Act. This created two systems of assessment – one performance based and one compliance based.
CQC could have introduced separate types of inspections – one for ratings and one for compliance – but there was no appetite for that given the obvious resource implications.
Alternatively, CQC could have created a rating scheme which included a rating of “compliant”. The old star rating system introduced by the Commission for Social Care Inspection did precisely that, with a rating of “satisfactory”.
CQC did neither of these things. Instead, it adopted a rating scheme that Ofsted had created, comprising four ratings: “inadequate”, “requires improvement”, “good” and “outstanding”. “Good” is – or is supposed to be – beyond compliance. However, as I suggest below, it seems likely that “good” is used in respect of services which are non-compliant in some respects. Similarly – and as I have suggested in an earlier article – “requires improvement” carries with it an implication of non-compliance but is in fact used to refer both to services that are compliant and those that are not. There was no place at the main table for compliance. As a result, the assessment framework is all about gathering “evidence” in order to make judgements on ratings.
This is unfortunate. Compliance requires clarity as to where the benchmark lies, investigative rigour and objective assessment, and decisions based on hard-edged, robust evidence. In contrast, ratings are susceptible to subjective and inconsistent interpretation and application. More insidiously, ratings judgements can be influenced by an inspector’s personal bias or preoccupations in a way that compliance judgements cannot, at least if compliance is assessed rigorously.
The demise of compliance is proven by the nebulous and flawed way in which inspectors are advised to go about making decisions on breach of regulations. In CQC’s Inspection guidance entitled Judgements and ratings (May 2018), it is stated at pages 5-6:
“You should use your judgement about whether to treat any incident as a breach of regulations in the normal way, taking into account the impact on people, the history of the service regarding any previous incidents and how it responded to them. Where an incident has a low impact, inspectors should consider the proportionate action to take – this may include making a recommendation – and reflect this in their report.”
Considerations about the impact on people and the history of compliance may well be relevant to a decision on what level of enforcement action (if any) ought to be used but they are not determinative of whether there is a breach of regulation at the time of an inspection.
The victory of ratings over compliance is confirmed by the fact that over 12,000 ASC services – more than 80% – are rated “good” or “outstanding”, which, if one believes CQC’s rating system, means there are no breaches of legal requirements in any of these services. While I welcome the fact that the sector performs so well in the face of all the continuing pressures, it is difficult to believe that there are no compliance issues in such a large number of services. Indeed, I would go so far as to speculate that the vast majority of “good” services will have compliance issues of one form or another, and could be rated “requires improvement”, but that on CQC’s approach the key issue when deciding which rating to use is how they respond to those compliance issues.
In summary, it seems that what is happening is that inspectors are either not applying their minds to whether something is a breach or, if they are, they are applying a flawed test which enables them to classify breaches as concerns on the basis of subjective and irrelevant considerations. In addition, there is now a further pressure on an inspector not to identify a breach of regulation: if one is identified, the overall rating can be no better than “requires improvement”, even in the best of services.
Rigorous assessments of compliance are limited to the minority of cases that proceed to a Management Review Meeting under the terms of CQC’s Enforcement Policy.
Of course, it is easy to criticise CQC without offering a potential solution. That is what most commentators do. So I will try to offer a way forward. First, CQC needs to put compliance at the centre of what it does. CQC needs to train its inspectors properly in relation to compliance and enforcement work.
Secondly, if a service is non-compliant, it should not automatically mean that it cannot be better than “requires improvement.” CQC should look at the particular breach and apply proportionality to its decision-making. A service should still be able to be rated “good” if the breach is, say, a one-off and relatively minor in nature. This will require CQC to overhaul its rating principles.
Thirdly, until that happens, CQC should be able to record a rating of “compliant” at the front of a report. This is particularly important for those services that are in the overall “requires improvement” band but have no identified breaches against them.
Lastly, CQC needs to be far clearer about how it uses and weighs up information in rating services. If information is available before the publication of a report which proves real improvement, that should be included in the report. As CQC states in its guidance, How CQC monitors, inspects and regulates adult social care services (May 2018):
“The site visit is one element of the inspection, and information may be received after the site visit which we may report on.”
These are not academic points. An overall “requires improvement” rating will have a damaging impact on a business, not only affecting its capital value but also admissions to the service and thus its income stream, which otherwise would be available for investment and service improvement. It will also affect the ability to recruit staff and the service’s risk rating when it comes to insurance renewal. In addition, the service will not be re-inspected for a year after the report is published. It is no exaggeration to say that the financial pressures can have a crippling effect on a care home business, especially if it is a smaller home.
If my recommendations are actioned, I believe that CQC will start again to fulfil its statutory compliance functions across all regulated services. In so doing, it will improve consistency and accuracy of reporting and, more generally, the effectiveness and efficiency of regulation.
Neil would like to thank Michael Curtis QC of Crown Office Chambers and Claire Ferrari, Director at ProRisk Care Consultancy Limited for their assistance in the preparation of this article.
Neil is an expert in the regulation and funding of care services and only acts for providers. He has a wealth of experience in this field, having acted in the past for inspectorates and other public bodies at a very senior level.
Neil can offer timely and cost effective advice to care service providers faced with regulatory and commercial issues. If you require assistance in this specialist area, please contact Neil for an initial, confidential discussion on 01483 451900 or email him at firstname.lastname@example.org.
Gordons Partnership LLP is a respected and rapidly growing law firm based in London and Guildford and is a recognised leader in the Healthcare field.