600-800 Word Short Paper

Short Paper assignments must follow these formatting guidelines: SINGLE SPACING, with a blank line separating paragraphs. Provide a BOLD subheading on its own line for each paragraph, characterizing that paragraph's content. Your paper should contain at least 4-5 separate paragraphs. Use 12-point Times New Roman font and one-inch margins, with discipline-appropriate citations. Word length requirement: 600-800 words.

Based on the text (pages 183-223) and the article, The STRONG – Static Risk and Offender Needs Guide, answer the following: Assessments.com, in collaboration with the Washington Department of Corrections, developed and implemented a new, state-of-the-art, evidence-based risk and needs assessment/supervision planning system for adult offenders. This tool, the STRONG, is now being implemented in a number of counties in California as well. What are the advantages and features of existing Assessments.com systems? What is one of the most advanced features of the new tool? Why do you think the STRONG is being widely used? Do you agree or disagree with the new assessment?
strong_r.pdf

Unformatted Attachment Preview


Designed To Fit
The Development and Validation of the STRONG-R
Recidivism Risk Assessment
Zachary Hamilton
Alex Kigerl
Michael Campagna
Robert Barnoski
Washington State University
Stephen Lee
University of Idaho
Jacqueline van Wormer
Washington State University
Lauren Block
University of Nevada, Reno
Recidivism risk assessment tools have been utilized for decades. Although their implementation and use have the potential
to touch nearly every aspect of the correctional system, the creation and examination of optimal development methods have
been restricted to a small group of instrument developers. Furthermore, the methodological variation among common instruments used nationally is substantial. The current study examines this variation by reviewing methodologies used to develop
several existing assessments and then tests a variety of design variations in an attempt to isolate and select those which
provide improved content and predictive performance using a large sample (N = 44,010) of reentering offenders in
Washington State. Study efforts were completed in an attempt to isolate and identify potential incremental performance
achievements. Findings identify a methodology for improved prediction model performance and, in turn, describe the development and introduction of the Washington State Department of Corrections' recidivism prediction instrument—the Static
Risk Offender Need Guide for Recidivism (STRONG-R).
Keywords: assessment; validation; recidivism; customization; risk
Over the past 30 years, the criminal justice system has witnessed an increase in the use of
actuarial risk assessments to predict recidivism and to allow for structured organizational
decision making. Offender risk assessments now assist in determining custody levels, guide
Authors’ Note: A technical report of a portion of this work was presented to the Washington State
Department of Corrections (Hamilton, Neuilly, Lee, & Barnoski, 2014). The findings and discussion are those
of the authors and may not represent the position of the Washington State Department of Corrections.
Correspondence concerning this article should be addressed to Zachary Hamilton, Assistant Professor,
Department of Criminal Justice & Criminology, Director, the Washington State Institute for Criminal Justice
(WSICJ), Washington State University, SAC 403K, Spokane, WA 99210; e-mail: zachary.hamilton@wsu.edu.
CRIMINAL JUSTICE AND BEHAVIOR, 2016, Vol. 43, No. 2, February 2016, 230–263.
DOI: 10.1177/0093854815615633
© 2015 International Association for Correctional and Forensic Psychology
contact standards, and determine intervention priority/eligibility. Actuarial risk tools are
now a standard part of how criminal justice professionals make decisions, occurring in both
the adult and juvenile systems (Bushway, 2013; National Center for Juvenile Justice, 2006).
In the United States and Canada, it is becoming improbable that an offender would evade an
assessment of risk following conviction. Despite their influence, due to the relatively small
group of researchers involved in creating recidivism assessments (e.g., Andrews & Bonta,
1995; Baird, 1981; Barnoski & Drake, 2007; Brennan & Oliver, 2000; Duwe, 2014; Hare,
1991; Latessa, Smith, Lemke, Makarios, & Lowenkamp, 2009), examinations of development methods and procedures are limited. Kroner, Mills, and Reddon (2005) argued that this
is likely due to the near decade of research and development needed to take an instrument
from inception to validation. We contend that this has unfortunately led to restricted knowledge and a relatively limited critique. In addition, others have suggested (Bushway, 2013;
Ridgeway, 2013) that there is a lack of comparative testing of important sources of variation,
which limits practitioners’ knowledge base when adopting a prospective tool.
Generally speaking, risk assessments consist of algorithms of various complexities, using
empirically predictive indicators of behavior (Falzer, 2013). From clinical judgment (1G), to
static only (2G), to static and dynamic (3G), and finally the development of responsive (4G)
instruments, offender assessment has been classified into four generations (see Andrews, Bonta,
& Wormith, 2006).1 Each generation adds a nuanced dimension that improves an instrument’s
functionality (Baird, 2009). However, classifying assessments is not cut-and-dried, as a newer
generation does not always provide increased performance. Furthermore, some instruments
claim robust applicability to a variety of populations (e.g., Level of Service Inventory-Revised
[LSI-R], Ohio Risk Assessment System [ORAS]) and others target specific jurisdictions or special populations (e.g., Minnesota Screening Tool Assessing Recidivism Risk [MnSTARR]).
Despite decades of use and study, the evaluation of instrument performance is still misunderstood by many practitioners, policy makers, and researchers. Furthermore, when the criterion of validation can be satisfied by demonstrating predictive performance only slightly better than a coin flip (more commonly known as betting the base rate),2 possessing a validated instrument can be a relatively low bar for an agency to hang its hat on. The field is moving forward,
and translation research is needed to guide practitioners away from underperforming models and
toward those with greater predictive strength (Gottfredson & Moriarty, 2006).
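
To make this concrete, a minimal sketch in Python (simulated data and an illustrative base rate; this is not the authors' analysis) shows that "betting the base rate," assigning every offender the same score, yields an area under the ROC curve (AUC) of exactly .5, so a tool can claim validation by clearing that mark only marginally:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 10_000
recidivated = rng.binomial(1, 0.35, n)  # 35% base rate, purely illustrative

# "Betting the base rate": every offender receives the same score,
# so the scores cannot rank offenders at all; AUC is exactly 0.5.
base_rate_score = np.full(n, 0.35)

# A deliberately weak tool: scores only faintly related to the outcome.
weak_score = 0.1 * recidivated + rng.normal(0, 1, n)

print("base-rate AUC:", roc_auc_score(recidivated, base_rate_score))      # 0.50
print("weak-tool AUC:", round(roc_auc_score(recidivated, weak_score), 3))  # ~0.53

Both scores would satisfy a bare "better than chance" criterion, which is the low bar described above.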
The current study sought to examine offender risk assessment development variations.
Our efforts attempted to address three objectives. First, we identify the concepts and
issues that vary the core provisions of risk assessment modeling and prediction. Specifically,
we discuss issues of contemporary recidivism risk assessments commonly used in North
America. Although not an exhaustive list, the issues described are the subject of current discussion and testing. Next, we use data from the Washington State Department of Corrections
(WADOC) to empirically examine how predictive performance is influenced by these issues.
Finally, we isolate and compare instruments and development design decisions, extending
the discussion of methodological variation. The empirical comparisons were made in an
effort to develop an offender recidivism prediction instrument that takes advantage of the
identified development methods, which resulted in improved predictive performance.
Methodological Design Issues of Current Risk Assessments
Often practitioners adopt an existing risk assessment off-the-shelf that may have been
previously developed in another state or country. That is, one has to start somewhere and
232
Criminal Justice and Behavior
starting with an existing instrument often makes the most sense for an agency. Alternatively, a state or agency may develop its own assessment, gathering items tailored to the populations and outcomes it intends to predict. In this scenario, the agency must
work with an experienced research team to cast a wide net of candidate items for an assessment pool to be gathered on its offenders. The implementing agency must then collect
the detailed assessment data and allow research partners to craft an instrument that is tailored to suit the needs of its assessors and practitioners. Conceivably, if proper methods of
development are adhered to at each stage, the instrument should perform better than an
off-the-shelf instrument, which lacks the item and outcome tailoring and localized context provided by agency-crafted prediction models (Wright, Clear, & Dickson, 1984). Currently, many states' correctional systems utilize customized instruments, including Minnesota (Duwe, 2014), Georgia (Meredith, Speir, & Johnson, 2007), and Texas (http://www.tdcj.state.tx.us/), to name a few.
Although a jurisdiction-specific or customized assessment has great appeal for an agency,
many design decisions must be made during the development process, each with the potential to alter, and possibly improve, the prediction of recidivistic outcomes. In 2008, Washington State began the data collection process to
develop its own 4G model risk assessment. In 2012, we began the design and development
process, attempting to utilize modern statistical techniques and agency input to maximize
prediction strength and functionality. When developing a tool for Washington State, we
aimed to achieve two primary goals: (a) select highly predictive items and (b) create predictive models that will be stable over time.
As customized assessments often do not receive as much attention and advocacy as more nationally renowned tools, there is a tendency to label them less than state of the art and limited in functionality. The current research describes our risk assessment development
process, creating a state of the art offender assessment that is customized to meet the needs
of Washington State. Using Washington State as an example, an intended purpose of this
study was to describe methodological variations that may influence the performance of all
offender assessments, in an effort to provide translational research or, as Bushway (2013) suggested, to give attention to the methodological issues that can affect criminal justice practice. Although more are likely to exist, we sought to describe and test five potential performance-impacting issues that can be observed in a variety of instruments used today: (a) static versus dynamic items, (b) item selection, (c) item weighting, (d) gender responsivity/specificity, and (e) specified outcomes.
Static Versus Dynamic Items
As indicated, the movement from 2G to 3G instruments added dynamic predictors of
recidivism with an emphasis on offender needs. Conceptually, when static and dynamic items are utilized in the same instrument, these tools are often referred to as risk-needs assessments, where static risks (e.g., prior number of convictions) are combined with dynamic needs (e.g., employed in the prior 6 months). As previously discussed by Baird
(2009), a distinction can be observed between the objectives of correctional practitioners
and psychometricians. When assessing an objectively measurable concept like recidivism,
a manifest outcome exists and prediction items of any and all types may be used in an effort
to predict said outcome. Prediction items may be assessed for their ability to predict recidivism via simple bivariate methods, regression approaches, or, more recently, machine learning models (Hamilton, Neuilly, Lee, & Barnoski, 2014; Oliver, Dieterich, & Brennan, 2014).
However, when attempting to predict a latent concept like substance abuse needs, a variety of psychometric approaches are often used, such as factor analysis, structural equation modeling, and item response theory. These analyses are completed in an effort to
assess intervention and treatment prioritization for a given needs area/domain. Instruments,
such as the LSI-R, have made attempts to combine these efforts, using items to predict manifest recidivism outcomes, while also dividing items into latent domains that may be used for
needs assessment. Although rarely discussed, it is imperative that a distinction be drawn
between risk and needs assessments: risk assessments make use of any and all item
types to predict an observed recidivism outcome and may do so without the need for latent
variable approaches. Accordingly, static and dynamic risk and needs items can jointly assist
in recidivism prediction. The current study focuses exclusively on risk prediction.
When included in a multivariate model, however, static criminal history items often
reduce the impact of dynamic needs items, resulting from issues related to shared variance
(see Barnoski & Aos, 2003). Nevertheless, prior findings have generally indicated that
dynamic items provide a unique contribution, improve prediction strength, and allow for
the possibility of offender risk to decrease over time (Cottle, Lee, & Heilbrun, 2001; Jung
& Rawana, 1999; Loeber & Farrington, 1998). The inclusion of dynamic items is therefore
necessary if agencies wish to identify reductions in risk.
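
The attenuation described above can be illustrated with a minimal simulation sketch in Python (the items, effect sizes, and correlation structure are illustrative assumptions, not values from the study): a dynamic item that appears strong in a bivariate model contributes less once a correlated static item enters the multivariate model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
priors = rng.poisson(2.0, n)  # static item: number of prior convictions
# Dynamic item (recent unemployment), correlated with criminal history.
unemployed = rng.binomial(1, 1 / (1 + np.exp(-(priors - 2))))
# True model: both items matter, but they share variance.
p_recid = 1 / (1 + np.exp(-(-1.5 + 0.4 * priors + 0.5 * unemployed)))
recid = rng.binomial(1, p_recid)

bi = LogisticRegression().fit(unemployed.reshape(-1, 1), recid)
multi = LogisticRegression().fit(np.column_stack([priors, unemployed]), recid)

# The bivariate coefficient absorbs part of the static item's effect;
# the multivariate coefficient shrinks back toward the true value (0.5).
print("bivariate unemployment coef:   ", round(float(bi.coef_[0, 0]), 2))
print("multivariate unemployment coef:", round(float(multi.coef_[0, 1]), 2))

The dynamic item still carries a unique, nonzero weight in the joint model, which is the basis for the claim that dynamic items contribute to prediction while also allowing assessed risk to change over time.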
Item Selection
Instruments should be composed of items that predict an outcome of interest. The Risk–
Need–Responsivity (RNR) model indicates that items are to possess an empirical relationship
with recidivism and are thus criminogenic (Andrews & Bonta, 2010). While the importance
of psychometrics and latent properties of domains and subscales can be debated, agencies are
most concerned with the instrument’s strength of prediction. Therefore, when predicting risk
of recidivism, all item types are fair game, including static, dynamic, and any other ethically
and theoretically relevant measure. The field has witnessed the use of a mix of clinical experience, bivariate, and multivariate techniques for determining risk scale item inclusion. At a
minimum, a significant bivariate association with recidivism is needed to identify an empirical relationship, or to make a determination that the item is a criminogenic predictor. Items
lacking this distinction will not add to the prediction of recidivism and may reduce the instrument’s strength, creating prediction noise (Baird, 2009). Multivariate assessments provide a
more stringent criterion for item inclusion. Accounting for issues related to shared variance
and multicollinearity, regression and other multivariate techniques utilize model assumptions
to establish items of importance, removing those which may have a bivariate relationship but
fail to affect prediction after accounting for other included measures. A debate within the field
has suggested that liberal selection criteria allow for the inclusion of tertiary items and domains
(e.g., free time/leisure activities) that divert attention from the core criminogenic measures
driving recidivism prediction (Baird, 2009; Wooditch, Tang, & Taxman, 2014).
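
The contrast between the two selection criteria can be sketched with a small simulation (Python; the items are hypothetical, and the L1-penalized logistic regression is used purely as one convenient multivariate selection method, not as the method of any particular instrument). A tertiary item passes a bivariate significance screen solely through its correlation with a core criminogenic item, while the multivariate fit drops it:

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
core = rng.binomial(1, 0.4, n)  # core criminogenic item drives the outcome
# Tertiary item: correlated with the core item, no direct effect of its own.
tertiary = np.where(rng.random(n) < 0.7, core, rng.binomial(1, 0.4, n))
recid = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * core))))

# Bivariate screen: BOTH items pass, each correlating with the outcome.
for name, item in (("core", core), ("tertiary", tertiary)):
    r, p = pearsonr(item, recid)
    print(f"{name:8s} r = {r:.3f}, p = {p:.1e}")

# Multivariate selection (deliberately strong L1 penalty for illustration)
# zeroes out the redundant tertiary item while retaining the core item.
X = np.column_stack([core, tertiary])
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.01).fit(X, recid)
print("L1 coefficients [core, tertiary]:", np.round(fit.coef_[0], 2))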
Item Weighting
A debate among instrument developers concerns the utility of multivariate models to provide
greater weight to important items and, in turn, improve predictive performance. Researchers
using analytic weights seek to improve prediction performance by ranking variables by
relative importance. As studies have indicated (J. Austin, Coleman, Peyton, & Johnson, 2003;
Barnoski & Aos, 2003), when predicting recidivism, measures such as criminal history and
age are strong predictors, whereas, although still important, measures such as alcohol use
and education attained are relatively weaker. Unweighted, or Burgess weighted,3 models
commonly utilize bivariate significance to identify variable importance and treat measures
equally, resulting in a simple summation of predictor scoring, where Burgess weighted tools
sum a series of dichotomous items (0/1) and the more generic unweighted tools provide
single-unit increases for each increasing risk response (0, 1, 2, etc.). Although unweighted
methods may assure that items predict in a theoretically consistent direction at a bivariate
level, the direction of effects represents a black box on a multivariate level, a perceived
disadvantage. Furthermore, these methods create redundancy, potentially overweighting
items’ importance as a result of shared variance.
The importance of weights has been debated, with some suggesting that weights provide
little performance improvement and are more susceptible to performance shrinkage (Dawes,
1979; Grann & Långström, 2007; Harris, Rice, & Quinsey, 1993; Wainer, 1976), whereas
others suggest substantial improvement gained when samples are sufficiently large (Einhorn
& Hogarth, 1975; Silver, Smith, & Banks, 2000). The use of bivariate selection procedures
over multivariate methodologies increases the likelihood of including items that are either
weakly, or even negatively, associated with the target outcome. Therefore, without an optimal weighting method, it is possible that many items utilized in unweighted tools dilute accuracy and create prediction noise (Baird, 2009). One of the best illustrations of this
concept was completed by Kroner and colleagues’ (2005) use of randomly selected items
(drawn from a coffee can) of four unweighted instruments, in which randomly formed models provided near-equivalent performance to their more established counterparts. Additional
examinations have demonstrated that 2G assessments using multivariate item selection and analytically weighted items can outperform 4G Burgess-scored instruments (J. Austin et al., 2003; Barnoski & Aos, 2003).
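
The weighting contrast can be sketched the same way (Python; ten hypothetical binary items with deliberately unequal true effects, not the items or weights of any named instrument). With a large sample and genuinely unequal item effects, regression-derived weights recover importance that a unit-weighted sum ignores:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 40_000
# Criminal-history-like items (the first three) matter far more than the rest.
true_w = np.array([1.2, 1.0, 0.8, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05, 0.05])
X = rng.binomial(1, 0.3, size=(n, 10))
recid = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_w - 1.5))))

X_tr, X_te, y_tr, y_te = train_test_split(X, recid, random_state=7)

burgess = X_te.sum(axis=1)  # Burgess score: unit-weighted sum of 0/1 items
weighted = LogisticRegression().fit(X_tr, y_tr).decision_function(X_te)

print("Burgess AUC: ", round(roc_auc_score(y_te, burgess), 3))
print("weighted AUC:", round(roc_auc_score(y_te, weighted), 3))

When the true item effects are roughly equal, the two scores converge, consistent with the mixed findings on weighting cited above; the gap favoring analytic weights grows as item importance becomes more unequal and samples become larger.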
The practical argument against weighting is that it complicates scoring and undermines face validity for practitioners, often making computer automation necessary. However, with the increased use of automation and agency data system integration, this once-pragmatic argument is losing ground.
Gender Specificity
Van Voorhis, Wright, Salisbury, and Bauman (2010) were instrumental in identifying the
theoretical need for assessments to be separated by gender. Aside from the fact that certain
items have been shown to be more predictive for female than male offenders (Andrews
et al., 2012; Smith, Cullen, & Latessa, 2009), there is a logical argument and empirical
evidence that the two genders represent separate populations (Else-Quest, Higgins, Allison,
& Morton, 2012). Creating gender-neutral assessments restricts considerations of gendered
distinctions as they relate to system rehabilitative practices and institutional culture. Gender-neutral risk assessments may further limit a clinician's ability to develop individual treatment plans (Hannah-Moffat, 2009). Indeed, practitioners prioritize management in a
gendered manner (Britton, 2003; Freiburger & Hilinski, 2010; Frohmann, 1997; Kruttschnitt
& McCarthy, 1985; Miller, 1999; Spohn, Beichner, & Davis-Frenzel, 2001) due to variant
pathways men and women take toward criminality (Blanchette & Brown, 2006; Brennan,
Breitenbach, Dieterich, Salisbury, & Van Voorhis, 2012; Daly, 1992, 1994; Salisbury & Van
Voorhis, 2009; Sampson & Laub, 1993). Incarceration, segregated by gender, is an obvious way the criminal justice system treats females differently than males.
Three methods to make an instrument gender-specific are described here. First, an instrument can be created and scored as gender neutral, with risk category cut points manually adjusted so that fewer female offenders score as high risk. Second, an instrument may utilize
gender as a predictor, or risk assessment item, encapsulating all gender variations in a single
measure. A third method, discussed here as gender-specific, selects and weights prediction
items for the separate gender subsamples. Beyond potentially improved predictive performance, gender-specific assessments provide item context and description that can assist in case management and, in turn, improve face validity and responsivity. Finally, gender-specific assessments start with women in mind, using items and scales that are formatted
specifically to address the criminal pathways and needs of female offenders.
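
The third method can be sketched as follows (Python; the gendered item effects are purely illustrative assumptions). Fitting separate prediction models to the male and female subsamples recovers gendered item weights that a pooled, gender-neutral model would average away:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 30_000
female = rng.binomial(1, 0.2, n)
X = rng.binomial(1, 0.35, size=(n, 5))
# Illustrative assumption: the fourth item predicts recidivism for women only.
w_male = np.array([0.9, 0.6, 0.4, 0.0, 0.2])
w_female = np.array([0.5, 0.3, 0.2, 1.0, 0.2])
logit = np.where(female == 1, X @ w_female, X @ w_male) - 1.2
recid = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Gender-specific development: select and weight items separately
# within each gender subsample rather than pooling the full sample.
for label, mask in (("male", female == 0), ("female", female == 1)):
    fit = LogisticRegression().fit(X[mask], recid[mask])
    print(f"{label:6s} item weights:", np.round(fit.coef_[0], 2))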
Specified Outcomes
One of the biggest practical problems in designing a risk assessment is measuring and
defining the recidivistic outcome to be predicted, as there may be more than one, or it may
differ by jurisdiction. First, one must identify a source of data to evaluate the recidivism outcome of interest. While some states provide recidivism outcomes at the state level, others do not have an integrated system, making risk assessment development more dif …
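
One common way to operationalize the recidivistic outcome is a new conviction within a fixed follow-up window after release. A minimal sketch (Python; the 36-month window, the event type, and the function and field names are hypothetical, jurisdiction-specific choices rather than anything specified in the article):

from datetime import date

# 36 months is a frequent follow-up window, but window length, event type
# (arrest, conviction, return to prison), and data source are exactly the
# jurisdiction-specific design choices discussed above.
FOLLOW_UP_DAYS = 3 * 365

def recidivated(release_date: date, new_convictions: list[date]) -> bool:
    """True if any new conviction falls within the follow-up window."""
    return any(
        0 <= (conviction - release_date).days <= FOLLOW_UP_DAYS
        for conviction in new_convictions
    )

print(recidivated(date(2015, 1, 10), [date(2016, 6, 1)]))  # True
print(recidivated(date(2015, 1, 10), [date(2019, 6, 1)]))  # False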
