Inmate programs survey
The weighting control factor is equal to the number of inmates in a facility on the interview date divided by the number expected for that facility. Several of the very smallest prisons have total inmate populations smaller than the number to be sampled per facility in their stratum. For example, if a sample prison contained only 15 inmates in a stratum in which 55 were expected to be interviewed, there would be a shortage of inmates. The DCF is used to adjust for the workload shortfall in such prisons.
The DCF is equal to the expected number of sample inmates for the facility's stratum divided by the number of inmates in the prison on the date of the sample. In most prisons the calculated value is less than one, because the prison had more inmates than the expected sample size for its stratum; in that case the DCF is set to 1.
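A minimal sketch of this capping rule in Python; the function and variable names are illustrative, not taken from the survey documentation:

```python
def compute_dcf(expected_sample_size, inmates_on_sample_date):
    """Ratio of the expected number of sample inmates for the facility's
    stratum to the facility's inmate count on the sample date, floored at 1
    as described above: prisons with more inmates than the expected sample
    size get no adjustment, while prisons with a shortfall get a factor
    above 1."""
    raw = expected_sample_size / inmates_on_sample_date
    return max(raw, 1.0)

# Example from the text: a prison with 15 inmates in a stratum where 55
# interviews were expected yields a factor of 55 / 15, roughly 3.67.
print(compute_dcf(55, 15))    # ~3.67
print(compute_dcf(55, 400))   # 1.0 for a typical larger prison
```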
A noninterview factor (NIF) was applied to adjust the weights to account for noninterviewed inmates. The NIF was calculated as follows: inmate records, including those of noninterviewed inmates, were separated by gender, stratum, race (Black, non-Black), and age. If a cell contained fewer than 30 unweighted cases, it was collapsed with the nearest age category.
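The text gives the cell structure and the collapsing rule but not the formula itself. The Python sketch below assumes the usual form of a noninterview adjustment, weighted eligible cases divided by weighted interviewed cases within each cell; the record field names (gender, stratum, race, age_group, weight, interviewed) are illustrative, not variable names from the survey files.

```python
from collections import defaultdict

MIN_CELL = 30  # cells with fewer unweighted cases are collapsed

def noninterview_factors(records, age_order):
    """Group records into gender/stratum/race/age cells, collapse small
    cells into the nearest age category, and return one factor per cell."""
    cells = defaultdict(list)
    for r in records:
        cells[(r['gender'], r['stratum'], r['race'], r['age_group'])].append(r)

    # Collapse cells with fewer than MIN_CELL unweighted cases into the
    # nearest age category within the same gender/stratum/race group.
    for key in list(cells):
        if key in cells and len(cells[key]) < MIN_CELL:
            gender, stratum, race, age = key
            idx = age_order.index(age)
            neighbors = sorted(
                (k for k in cells if k != key and k[:3] == (gender, stratum, race)),
                key=lambda k: abs(age_order.index(k[3]) - idx),
            )
            if neighbors:
                cells[neighbors[0]].extend(cells.pop(key))

    # Assumed ratio form: weighted eligible cases (interviews plus
    # noninterviews) over weighted interviews within the cell.
    factors = {}
    for key, recs in cells.items():
        eligible = sum(r['weight'] for r in recs)
        interviewed = sum(r['weight'] for r in recs if r['interviewed'])
        factors[key] = eligible / interviewed if interviewed else 1.0
    return factors
```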
The OCRAF was used to adjust the weighted sample to reflect varying interview rates among inmates in different offense categories. It was computed separately for males and females, for a number of different offense categories for State inmates and for a separate set of offense categories for Federal inmates. It was calculated as the weighted count of interviews and noninterviews (weighted through the DCF) divided by the weighted count obtained through application of the NIF for each stratum.
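A sketch of that ratio in Python, assuming each record carries a DCF-adjusted weight (w_dcf), a NIF-adjusted weight for interviewed cases (w_nif), and sex, offense, and interviewed fields; the field names and the sex-by-offense grouping used here are inferred from the description above, not taken from the survey files.

```python
from collections import defaultdict

def offense_category_factors(records):
    """For each sex-by-offense-category group, divide the DCF-weighted
    count of interviews and noninterviews by the NIF-weighted count of
    interviews, mirroring the ratio described above."""
    numerator = defaultdict(float)
    denominator = defaultdict(float)
    for r in records:
        key = (r['sex'], r['offense'])
        numerator[key] += r['w_dcf']          # interviews + noninterviews
        if r['interviewed']:
            denominator[key] += r['w_nif']    # interviews only
    return {k: numerator[k] / denominator[k]
            for k in numerator if denominator[k] > 0}
```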
The CCRAF adjusts the weighted interviews to stratum-level counts as of a specific date; that date varies by year. For the date used in a given collection year, see the codebook for that year.
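The exact formula is not given here, but an adjustment to stratum-level counts on a reference date is ordinarily a ratio of the independent control count to the weighted interview total in each stratum. The Python sketch below assumes that form; it is not a formula quoted from the codebook.

```python
def count_control_factors(weighted_interview_totals, stratum_control_counts):
    """For each stratum, divide the independent inmate count on the
    reference date by the weighted total of interviews in that stratum.
    This ratio form is an assumption about the CCRAF's general shape."""
    return {
        stratum: stratum_control_counts[stratum] / total
        for stratum, total in weighted_interview_totals.items()
        if total > 0
    }
```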
A sample survey is subject to two possible types of error: sampling and nonsampling. The accuracy of an estimate depends on both, but the full extent of the nonsampling error is unknown. Consequently, one should be particularly careful when interpreting results based on a relatively small number of cases or on small differences between estimates. The standard errors primarily reflect sampling variability; they also partially measure the effect of some nonsampling errors in responses and enumeration, but they do not measure systematic biases in the data.
Bias is the average, over all possible samples, of the differences between the sample estimates and the desired value.
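Written symbolically (this simply restates the definition above; the notation is not from the survey documentation), with the hatted quantity denoting the estimate from one sample and the plain quantity the desired value:

```latex
\operatorname{Bias}(\hat{\theta}) = E\left[\hat{\theta}\right] - \theta
```

where the expectation is taken over all possible samples.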
Nonresponse in the SISCF and SIFCF resulted from failing to obtain the cooperation of sampled prisons (first-stage nonresponse) or failing to obtain completed interviews with sampled inmates (second-stage nonresponse). In the weighting of the sample, the NIF adjusted the weights for second-stage nonresponse. The NIF was calculated based on gender, race, age, and stratum. However, biases exist in the estimates to the extent that noninterviewed inmates differ from interviewed inmates in the same gender-race-age-stratum group. Total nonresponse for each survey includes both first- and second-stage nonresponse. Differences between these surveys and data from other sources may also arise from differences in interviewer training and experience and from differing survey procedures.
This is an example of nonsampling variability not reflected in the standard errors, so caution should be used when comparing results from different sources. Note on results based on a small number of cases or on small differences between estimates: when summary measures such as medians and percent distributions are computed on a base smaller than 5,000 for the SISCF or 1,000 for the SIFCF, they probably do not reveal useful information because of the large standard errors involved.
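To see why a small base yields large standard errors, here is a rough Python illustration using the simple-random-sampling formula for a proportion, inflated by an assumed design effect. It is only a back-of-the-envelope aid: the surveys use a complex design, and their own variance estimates should be used for real work.

```python
import math

def approx_se_of_proportion(p, n_cases, design_effect=1.0):
    """Rough standard error of an estimated proportion p that rests on
    n_cases sample interviews: sqrt(deff * p * (1 - p) / n).  Purely
    illustrative; the design effect value is an assumption."""
    return math.sqrt(design_effect * p * (1.0 - p) / n_cases)

# The same 20% estimate is ten times less precise when it rests on
# 25 interviews instead of 2,500:
print(round(approx_se_of_proportion(0.20, 2500), 4))  # 0.008
print(round(approx_se_of_proportion(0.20, 25), 4))    # 0.08
```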
In addition, nonsampling errors may produce small differences that appear to be borderline significant but do not reflect real differences. Sampling variability is the variation that occurs by chance because a sample was surveyed rather than the entire population. Users interested in obtaining the restricted data must complete a Restricted Data Use Agreement, specify the reasons for the request, and obtain IRB approval or a notice of exemption for their research.
A two-stage sampling procedure was used: prisons were selected in the first stage, and inmates within sampled prisons were selected in the second stage. The overall response rates for state and federal inmates were [...].
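In a two-stage design like this one, an inmate's base weight is ordinarily the inverse of the product of the two selection probabilities, with the factors described earlier applied on top. The sketch below assumes that composition; the multiplicative form and the example probabilities are assumptions, not details quoted from the documentation.

```python
def base_weight(p_prison, p_inmate_within_prison):
    """Inverse of the product of the first-stage (prison) and
    second-stage (inmate within prison) selection probabilities."""
    return 1.0 / (p_prison * p_inmate_within_prison)

def final_weight(base, dcf, nif, ocraf, ccraf):
    """Apply the adjustment factors discussed above to the base weight.
    Treating them as simple multipliers, in this order, is an assumption
    consistent with the descriptions rather than a quoted formula."""
    return base * dcf * nif * ocraf * ccraf

# Example: a prison sampled with probability 0.5 and an inmate sampled
# within it with probability 0.1 give a base weight of 20, which the
# adjustment factors then scale up or down.
w = base_weight(0.5, 0.1)
print(final_weight(w, dcf=1.0, nif=1.08, ocraf=0.97, ccraf=1.02))
```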
The data files contain analysis variables corresponding to the Federal and State datasets. The analysis variables are useful in replicating results reported by the Bureau of Justice Statistics. ICPSR also routinely creates ready-to-go data files, along with setup files in the major statistical software formats and standard codebooks to accompany the data. In addition to these procedures, ICPSR performed the following processing steps for this data collection:
The public-use data files in this collection are available for access by the general public. One or more files in this data collection have special restrictions; restricted data files are not available for direct download from the website.
An inmate satisfaction survey? In a jail? Whadaya, nuts? But it works! As I have traveled around visiting various jails and prisons, I sometimes run across a practice that I have not seen before: something cool, something that works better than what is typically done at other facilities. The latest example is the inmate satisfaction survey done at the Ada County Jail. Of course, when I tell people about a jail inmate satisfaction survey, the typical response is incredulity. In a jail? And yet it works! You should consider doing this at your facility.
In the interest of full disclosure, I have long had a negative attitude toward patient satisfaction surveys. I have disliked and distrusted them. That is because I worked the first part of my career, before I found my true calling in Correctional Medicine, as an Emergency Physician. Hospitals love to do patient satisfaction surveys of ER patients, and that is fine; the problem is that reimbursement is now tied to the scores.
The lower your satisfaction scores, the less you get paid! This puts ER docs in the uncomfortable position of pursuing satisfaction scores, often at the expense of good medicine. Do I refuse to prescribe an antibiotic that is not indicated and risk a bad score? Or do I go ahead and give the antibiotic, even though it is bad medicine, because that is what the patient wants? And what about the patient demanding narcotics: do I dare risk their wrath by declining to enable their addiction? Fortunately, this type of dysfunctional satisfaction survey is not what is being done at the Ada County Jail. I can say from experience that this is a very well-run jail on all accounts.
The inmate satisfaction surveys instituted by Sheriff Raney are but one example, albeit an outstanding one. The surveys are done at the Ada County Jail every two weeks. Around 40 inmates are selected randomly by a computer, though not all will take the time to respond. The survey asks inmates to respond to the following items:
1. I feel safe at the Ada County Jail.
2. The Ada County Jail is clean.
3. My basic health care needs are met.
4. The food is good.
5. The jail staff responds to my concerns.
6. The jail staff is professional.
7. The grievance process is fair.
8. Do you have any other comments or concerns for us?
There is no setting of goals based on the numbers in the manner that I used to hate back in my ER days. Instead, the responses are used as a type of barometer of the mood of the jail.