System-wide Data Completeness Q4 2015-16

Overall Data Completeness System-Wide
[Chart: average program grades (blue) and client-level data completeness (red), July-September]
Data Completeness Trends

The chart above provides an overview of TBIN data completeness for the fourth quarter of the 2015-16 fiscal year. Generally, these scores are very good, and you and your staff should be proud of the quality of our data. To continue improving, however, we want to provide some general information on the trends and issues we are seeing system-wide.
  • The average of all program grades (blue) looks only at each program's overall grade, with no consideration for how many clients were served or for scores on individual questions. This measure is at the low end of an A (95-100%) for all three months [July, 95.8%; August, 95.7%; September, 95.3%] and shows a slight decrease over the quarter.
  • The client-level data (red) looks at nulls in the required fields for all clients with an Entry/Exit or Service during this time frame, rather than at overall program scores. This measure shows a high B (90-94.99%) average for all three months [July, 93.8%; August, 93.9%; September, 93.7%], with no noticeable trend.

Both measures are valuable, but they highlight several important differences. The first measure considers program-level data, which weights every program equally when calculating the overall system-wide score. (For example, Program A at 95% and Program B at 100% average to 97.5%.) While not incorrect, this measure also means that a client served at a low-volume program carries much greater weight in the final grade than a client served at a high-volume program.

Consider the previous example, where the average was 97.5%. If Program A serves 100 clients and Program B serves only 10, the client-level data gives a real average of 95.45% ((100*.95+10*1)/110). That is why we also provide the client-level average. Both measures are important because each provides a different way of looking at the scores in TBIN.
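To make the arithmetic concrete, here is a minimal Python sketch (not part of any TBIN tool; the program names and figures are simply the example above) that computes both the program-level and the client-weighted averages:

```python
# Illustrative figures from the example above: each program's completeness
# grade and the number of clients it served during the quarter.
programs = {
    "Program A": {"grade": 0.95, "clients": 100},
    "Program B": {"grade": 1.00, "clients": 10},
}

# Program-level average: every program counts equally, regardless of volume.
program_avg = sum(p["grade"] for p in programs.values()) / len(programs)

# Client-level average: each program's grade is weighted by clients served.
total_clients = sum(p["clients"] for p in programs.values())
client_avg = sum(p["grade"] * p["clients"] for p in programs.values()) / total_clients

print(f"Program-level average: {program_avg:.2%}")    # 97.50%
print(f"Client-weighted average: {client_avg:.2%}")   # 95.45%
```

The gap between the two figures widens as programs differ more in both client volume and grade, which is why looking at only one of the measures can be misleading.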

Observed Problem Areas

The following are the areas where we are seeing significantly lower grades overall, or questions that are null for 10% or more of all clients. By working to correct these issues, we can improve our data completeness system-wide.
  • For Services Only providers (those that do not use the Entry/Exit workflow), the biggest issue we are seeing is a lack of valid responses to the Destination question. We need to ensure that TBIN users select an appropriate response for the Destination question that displays on the Services window. See the Service Destination guide on the TBIN Help Center by clicking HERE.
  • For all providers, we are seeing the Homeless questions display nulls 10-15% of the time prior to the October 1, 2016 updates to the HUD Data Standards. The recent updates to the HUD UDEs and the change to the assessment within TBIN should help rectify this issue, because the assessment will only display the questions that need to be answered for the current client based on the responses to Residence Prior to Project Entry and Length of Stay. It is still important to make sure the correct answers are selected and the follow-up questions are asked so that the accuracy and completeness of data in TBIN continue to improve.
  • For Entry/Exit providers, we are seeing lower-than-average scores on the HUD Verification questions, which register nulls about 10% of the time. When completing these, make sure that all types of income, non-cash benefits, disabilities, and health insurance are answered; missing even one option within the HUD Verification will still register as a null value. Another common error is failing to complete the HUD Verifications for other members of the household when required.
These issue areas may not apply to every program; they are simply the issues most commonly observed system-wide.
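If you would like to check your own program's data before the next quarterly report, the short Python sketch below shows one way to flag questions with high null rates. It assumes client records exported from TBIN as a list of dictionaries, with None marking a missing response; the field names and layout are hypothetical and will differ from your actual export.

```python
# Hypothetical sketch: flag required questions that are null for 10% or
# more of clients. The record layout and field names are assumptions;
# adapt them to whatever export you pull from TBIN.
clients = [
    {"Destination": "Rental by client", "Residence Prior to Project Entry": None},
    {"Destination": None, "Residence Prior to Project Entry": "Staying with family"},
    # ... one dictionary per client served during the quarter
]

REQUIRED_FIELDS = ["Destination", "Residence Prior to Project Entry"]
NULL_THRESHOLD = 0.10  # the 10% cutoff used in this report

for field in REQUIRED_FIELDS:
    null_count = sum(1 for record in clients if record.get(field) is None)
    null_rate = null_count / len(clients)
    if null_rate >= NULL_THRESHOLD:
        print(f"{field}: {null_rate:.0%} null - needs attention")
```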
 
 