
local economies due to increased worker spending, and significant cost savings for state and federal governments due to reductions in workers' use of public assistance programs such as Medicaid.52

3.6. Limitations

The programs described in this report show promise, yet significant research gaps have hindered efforts to establish a solid evidence base for their effectiveness, making it difficult to convince employers and health systems to invest in such efforts.

The first gap is that many of the programs included in this scan have not been evaluated. These include programs and initiatives that arguably demonstrate advanced role and upskilling best practices, such as Massachusetts Supportive Home Care Aides, Cooperative Home Care Associates, the value-driven QuILTSS (TN) workforce development program, and the Care Connections Senior Aides Project (NYC); the latter reports limited outcomes, but no formal evaluation is documented.

Second, even when programs were evaluated, the outcome measures had weaknesses. Evaluations relied heavily on self-reported measures and qualitative analyses, even for wages, work hours, retention, and client health outcomes. Moreover, the studies focused primarily on process and short-term worker outcomes, such as the number of trainees, training retention, training satisfaction, and learning gains measured at program completion. More distal measures of program impact, such as worker retention and patient health outcomes, were reported less frequently, and outcomes of interest to health care systems and payers, such as health care cost savings, were limited to simulations.

Lastly, the study designs had limitations. They did not analyze which components of the intervention (independent variables) were driving observed effects; for example, they were unable to disaggregate the contributions of basic training, soft skills, and upskilling components. Nor did they account for the significance of contextual and institutional factors, making it impossible to generalize results to other settings. While a handful of studies did employ comparison groups, only one program evaluation used randomized assignment to a control group.32 Further, no comparisons of alternative program approaches have been conducted using comparative effectiveness study designs.
