To use the RIM Bot to auto-classify documents in the Document Inbox, an Admin must train and deploy a Trained Model. This training allows the machine learning model to learn from your inputs, preparing it to intelligently process data.

If you wish to update your Trained Model at any time (for example, to reflect new document types, or to attempt to improve your results), you must follow the process described here to train, evaluate, and deploy it.

How To Train a Model

Like all machine learning tools, the RIM Bot requires input to learn before performing tasks on its own. Generally, the larger and more accurate the inputs, the better the resulting model will be. Vault stores accumulated input in Trained Model object records.

Prediction Confidence

Vault uses a Prediction Confidence score to indicate how certain RIM Bot is that its prediction is correct. This value is between 0 (likely wrong) and 1 (likely correct). The better your inputs, the higher the Prediction Confidence will be. Vault stores Prediction Confidence scores in Prediction object records.

Prediction Confidence Threshold

Vault uses the Prediction Confidence Threshold field value on a Trained Model record to determine the minimum score a Prediction must reach before the model can act on it. For example, in the case of auto-classification, if the Prediction Confidence Threshold value is 0.95 and the Prediction Confidence for a document uploaded to the Document Inbox is 0.9728, Vault auto-classifies that document because its score exceeds the threshold.
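The threshold check described above can be sketched in Python (the function and values are illustrative only, not Vault identifiers):

```python
# Sketch of the auto-classification threshold check. The function name
# and values are hypothetical; only the comparison logic comes from the
# description above.
def should_auto_classify(prediction_confidence: float, threshold: float) -> bool:
    """Return True when the model's confidence meets the deployed threshold."""
    return prediction_confidence >= threshold

# Example from the text: threshold 0.95, document scored 0.9728.
print(should_auto_classify(0.9728, 0.95))  # True: the document is auto-classified
print(should_auto_classify(0.90, 0.95))    # False: left for manual classification
```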

Creating a Document Classification Trained Model

Before creating a Trained Model, carefully consider the following limitations:

  • Vault allows Admins to train models in Pre-release or Sandbox environments using their production environment documents to verify the training process. However, these models cannot be moved to your production Vault, so you must also create and train Trained Models in the production environment.
  • Certain categories of documents cannot be auto-classified or used in model training. These include:
    • Audio and video files
    • Non-text files, such as ZIP files, statistical files, or database files
    • Non-English documents. However, you may use documents that are only partially in English for model training.
    • Documents from which Vault cannot extract text, for example, because the text is too blurry.
  • We recommend using at least 3,000 documents in steady states, such as Approved or Final, to train the machine learning model. You may use RIM Bot on Vaults with 1,000-3,000 documents; however, this may limit the quality of your predictions.
  • If any inputs are misclassified documents, predictions may be negatively impacted. For example, if several documents that should have been classified as Regulatory > Correspondence > Approval Letter were classified as Regulatory > Correspondence > Agency Decisions, RIM Bot will be less confident about predictions for those document types.

Creating the Trained Model Object Record

  1. Navigate to Admin > Configuration > Document Fields and review your Vault’s configuration for the RIM Auto Classification and Tags fields. In order for users to observe the auto-classification process in their Document Inbox:
    • The Unclassified document type must use the RIM Auto Classification field.
    • Field-level security for the Tags field must be configured as Read Only or Editable.
  2. Navigate to Admin > Business Admin and click into the Trained Model object.
  3. Click Create.
  4. For the Trained Model Type, select Document Classification.
  5. Enter a Prediction Confidence Threshold.
    • RIM Bot will not use any predictions below this threshold for auto-classification. While Vault will accept any value between zero (0) and one (1), we recommend using a value of 0.85 or above.
    • Once you have sent a Trained Model for training, you cannot change this value.
    • Generally, the higher the number, the more accurate the classifications; however, you may auto-classify fewer documents. See more details on evaluation.
  6. If you intend to use the Training Window training method, set the Training Window Start Date accordingly.
  7. Under Model Parameters, set the Minimum Documents per Document Type.
    • Any document types with fewer than this minimum number of documents cannot be auto-classified. Higher minimums may yield better Prediction Confidence but will exclude more document types from auto-classification. We recommend the following minimums based on your Vault's document count:
      • 1,000 to 10,000 documents = 10
      • 10,000 to 25,000 documents = 15
      • 25,000 to 50,000 documents = 25
      • 50,000 to 100,000 documents = 50
      • 100,000 to 150,000 documents = 75
      • 150,000 to 200,000 documents = 100
    • The Advanced Model Parameters field is system-managed; you do not need to set anything here.
  8. Click Save.
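The recommended minimums listed in step 7 can be expressed as a small lookup. This is a hypothetical helper for planning purposes, not a Vault feature; the tier boundaries come from the table above:

```python
# Hypothetical helper mapping a Vault's document count to the recommended
# Minimum Documents per Document Type value from the table above.
TIERS = [
    (10_000, 10),
    (25_000, 15),
    (50_000, 25),
    (100_000, 50),
    (150_000, 75),
    (200_000, 100),
]

def recommended_minimum(document_count: int) -> int:
    """Return the recommended minimum for a Vault of the given size."""
    if document_count < 1_000:
        raise ValueError("RIM Bot training expects at least 1,000 documents")
    for upper_bound, minimum in TIERS:
        if document_count <= upper_bound:
            return minimum
    return 100  # training caps at 200,000 documents, so the top tier applies

print(recommended_minimum(30_000))  # 25
```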

After creating the Trained Model object record, choose a training method.

Training Model Filters

If desired, you can customize Trained Models by compiling custom lists of documents to use for training. There are two (2) possible methods.

Method 1: Attach CSV of Document IDs

You can attach a CSV of Document IDs to your Trained Model. Vault evaluates this list when you train the model to determine which documents to use as training input.

To add a CSV to a Trained Model:

  1. Under Trained Model Artifacts, click Upload.
  2. Browse your computer and select your CSV file containing the desired document IDs.
  3. In the record’s Actions menu, select Train Model, then select Attached CSV of Document IDs as the method Vault should use to pull the Document Set.

Once you train the model, Vault automatically sets the Training Set Type to List of Document IDs.

Method 2: VQL Query

You can add a custom VQL Query to your Trained Model. Vault evaluates this query when you train the model to determine which documents to use as training input.

To add a custom VQL Query to a custom Trained Model:

  1. Under Document Criteria, enter your Document Criteria - VQL.
  2. Click Validate. Vault evaluates the VQL Query. Vault displays a green banner at the top of the screen if the query is valid. If the query is invalid, Vault displays an error message below the Document Criteria - VQL field.
  3. When your VQL query is valid, select Train Model from the record’s Actions menu, then select Training Window Start Date as the method Vault should use to pull the Document Set.

Once you train the model, Vault automatically sets the Training Set Type to Document Criteria.
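As an illustration only, here is the kind of query string you might enter in the Document Criteria - VQL field, shown as a Python constant. The field name (status__v), the STEADYSTATE() function, and the type filter are assumptions; confirm the exact syntax against your Vault's VQL documentation before using it:

```python
# Illustrative VQL string; the identifiers below are assumptions, not
# verified Vault configuration. Validate any real query with the
# Validate button described above.
document_criteria_vql = (
    "SELECT id FROM documents "
    "WHERE status__v = STEADYSTATE() "
    "AND type__v = 'Regulatory'"
)
print(document_criteria_vql)
```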

Choosing a Document Set Method

To train your model, choose a method to pull documents to use as input in this Trained Model. There are two options: Training Window Start Date and Attached CSV of Document IDs.

Training Window Start Date

The Training Window Start Date method ignores Archived documents; this is a known issue. If you want to train on Archived documents, you must use the Attached CSV of Document IDs method.

This method pulls all documents in a Steady State, such as Approved or Final, with a Version Created Date value between the Training Window Start Date and the current date. If more than 200,000 documents fit these criteria, Vault uses the 200,000 most recent. If you choose this method, ensure you have filled in a Training Window Start Date value on your Trained Model record.

Attached CSV of Document IDs

This method takes as input a list of Document IDs for Steady state documents. A Document ID is Vault’s unique identifier for a document, so this method allows Admins to tailor the exact list of documents used to train the Trained Model. While you can use any process that results in a list of Document IDs, the following steps create a report that produces one:

  1. Create a new report. Add filters to find the Steady state documents you wish to use.
  2. Add the Document ID field as a column.
  3. Run the report and export the results to CSV.
  4. Open the exported file. Change the name of the Document ID column to id.
  5. Save the file as documentset.csv.

Your CSV file cannot contain more than 200,000 Document IDs.
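Steps 4-5 above can also be scripted. This sketch renames the exported "Document ID" column to id and enforces the 200,000-row limit; the export filename and sample IDs are illustrative stand-ins, not real Vault data:

```python
import csv

# Sketch of steps 4-5: rename the report's "Document ID" column to "id"
# and write documentset.csv. The filename "report_export.csv" and the
# sample rows below are illustrative assumptions.
MAX_IDS = 200_000  # Vault's limit on Document IDs per training CSV

# Stand-in for the CSV exported from your report (step 3).
with open("report_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Document ID", "Name"])
    writer.writerows([["101", "Approval Letter"], ["102", "Agency Decision"]])

with open("report_export.csv", newline="") as src:
    rows = list(csv.DictReader(src))

with open("documentset.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["id"])  # Vault expects the column header to be "id"
    for row in rows[:MAX_IDS]:
        writer.writerow([row["Document ID"]])
```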

Using the Document ID method allows Admins to select any documents to train the model. However, RIM Bot trains only on Steady state documents, and Document IDs for documents that are not in a Steady state at the time of training are ignored.

We also strongly recommend that the IDs provided include all document types that users may send to the Document Inbox. RIM Bot will try to classify every document that arrives in the Inbox, and if it has not learned certain document types, those documents are likely to be misclassified.

Once you have created the documentset.csv, upload it as a Trained Model Artifact to your Trained Model record.

Creating Excluded Classifications

You can define classifications that will be excluded from your Trained Model. The RIM Bot excludes the specified classification(s) from all extraction, training, and testing during model deployment. Additionally, later predictions the RIM Bot makes are not actioned if a document is in (or predicted to be in) an excluded classification.

You can specify excluded classifications before or after a model is trained. If you add an excluded classification after the model has been trained, the model is not automatically retrained; however, the RIM Bot still does not take any action on documents of the excluded classification.

This exclusion applies only to the Trained Model to which the Excluded Classification belongs. If you create an Excluded Classification for a Trained Model which is no longer in use, you must re-define it for the currently-deployed model.

To create an excluded classification:

  1. Under Excluded Classifications, click Create.
  2. Select the Status of the Excluded Classification.
  3. Select the Classification you wish to exclude.
  4. Enter any relevant Comments.
  5. Click Save.

Training the Trained Model

Once you have determined the appropriate Document Set Method, perform the Train Model action. Choose the appropriate Document Set method when prompted, then click Start. The Trained Model record moves to the In Training state.

An asynchronous job tracks two activities as part of training:

  1. Document Extraction: During this process, the system collects the data from the specified document set. The output is a CSV file (document_extract_results.csv), attached under Trained Model Artifacts, in which an Admin can see which documents could be used as input and which could not. Vault sends a notification to the Admin who started the action when the extraction is complete.
  2. Model Training: During this process, the system uses 80% of the extracted data to build a machine learning neural network model, then tests that model using the remaining 20%. The output is a set of performance metrics, stored in the Trained Model Performance Metrics object and in attached CSVs under Trained Model Artifacts. Vault sends a notification to the Admin who started the action when training is complete.
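The 80/20 split used in Model Training is standard machine learning practice. This sketch shows the general idea; the shuffling and seed here are generic illustrations, not Vault's actual implementation:

```python
import random

# Generic 80/20 train/test split of the kind described above.
# This is illustrative machine learning practice, not Vault's code.
def train_test_split(document_ids, train_fraction=0.8, seed=42):
    ids = list(document_ids)
    random.Random(seed).shuffle(ids)  # randomize before splitting
    cutoff = int(len(ids) * train_fraction)
    return ids[:cutoff], ids[cutoff:]

train_set, test_set = train_test_split(range(3000))
print(len(train_set), len(test_set))  # 2400 600
```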

The time required to complete these jobs varies with the number of documents used as input: from about one (1) hour for Vaults training on 3,000 documents to about 24 hours for Vaults training on 200,000 documents.

Once model training is complete, the Trained Model record moves to the Trained state.

Training a Trained Model in Pre-Release or Sandbox Environments with Production Data

You can train a Trained Model in your Pre-Release or Sandbox Vault with production documents for evaluation purposes. You cannot move the resulting Trained Model to your production environment.

Both methods for document selection are available. If you’re using the Attached CSV of Document IDs method, be sure to use Document IDs from your production Vault.

To train using production data, run the Train Model From Production Data action. This action is only visible in Pre-Release and Sandbox Vaults.

After evaluating your Trained Model, you’ll need to perform training again in your production Vault to begin using RIM Bot features there. If you use the Attached CSV of Document IDs method of document selection, you can use this same list of documents to create a similar Trained Model in your production environment.

Evaluating the Trained Model

Vault provides key metrics you can reference in the Trained Model record’s Training Summary Results field to evaluate your model: Extraction Coverage, Auto-classification Coverage, and Auto-classification Error Rate. See the definitions for these metrics and how to improve them.

Deploying the Trained Model

Once you have evaluated your Trained Model, select the Deploy Model action from the Trained Model record, review the prompt to ensure you agree with the outcome, and click Start. The Trained Model record moves to the In Deployment state.

An asynchronous job tracks the deployment of this Trained Model in your Vault. The time required varies, from 30 minutes to two (2) hours. Vault sends a notification to the Admin who performed the action when deployment is complete.

Once the deployment job finishes, the Trained Model record moves to the Deployed state and Vault begins auto-classifying the documents in the Document Inbox.

Only one (1) Trained Model per Trained Model Type can be deployed at a time.

Replacing a Deployed Trained Model

To replace a deployed model with a new Trained Model, simply deploy the new model. It replaces the currently active model, and auto-classification is not interrupted. This is the recommended method for replacing models.

Additional Trained Model Actions & Details

You can have only five (5) Trained Models per Trained Model Type. If you attempt to train a sixth, Vault advises you to archive a model before training another. To do so, select the Archive Model action on a Trained Model record; the record moves to the Archived state. Archived models are not recoverable.

You can also remove deployed models and disable auto-classification by using the Withdraw Model action on a Trained Model in the Deployed state. Doing so moves the Trained Model record back to the Trained state.

About the Prediction Object

When a Trained Model is deployed and used to predict data for a document, the Prediction object keeps track of each individual prediction attempt. It’s unlikely that Admins will need to work with this object directly, but it may be useful to understand the object fields:

  • Prediction ID: Unique identifier for that prediction, automatically assigned by Vault
  • Related Record Unique ID: Identifier for the file being evaluated, automatically assigned by Vault
  • Related Record: Metadata for the document being evaluated, formatted as JSON. You can locate the Vault Document ID, Major version, and Minor version here if needed.
  • Predictions: The prediction data for this attempt from RIM Bot, formatted as JSON. You can use this field to understand whether a prediction failed and why; which Trained Model was used to make the prediction; and, in the case of Document Classification, the first, second, and third top predictions from the model along with their Prediction Confidence scores. If the first Prediction score is above the deployed Trained Model's Prediction Confidence Threshold, the document is auto-populated with that prediction, which is also reflected in the auto-populated JSON parameter.
  • Feedback: Post-prediction activity. This field shows the current value for the data being predicted in the trueValue JSON parameter and if that value matches the corresponding first Prediction in the Predictions field in the trueValueMatch JSON parameter.
  • Additional Details: Lists the source(s) from which Vault generates the prediction; this can include multiple sources.
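As an illustration of reading a Prediction record's Predictions field, here is a sketch in Python. The exact JSON structure and key names below are assumptions; only the general shape (ranked predictions with confidence scores, compared against the deployed threshold) comes from the field descriptions above:

```python
import json

# Hypothetical Predictions payload; the keys and classifications below
# are illustrative assumptions, not Vault's documented JSON schema.
predictions_json = json.dumps({
    "trainedModel": "document_classification_v2",
    "predictions": [
        {"classification": "Approval Letter", "confidence": 0.9728},
        {"classification": "Agency Decision", "confidence": 0.0201},
        {"classification": "Meeting Minutes", "confidence": 0.0071},
    ],
})

data = json.loads(predictions_json)
top = data["predictions"][0]          # first (highest-confidence) prediction
threshold = 0.95                      # the deployed model's threshold
if top["confidence"] >= threshold:
    print(f"Auto-classified as {top['classification']}")
```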

About the Prediction Metrics Object

When a Trained Model is deployed and used to predict data for a document, the Prediction Metrics object keeps track of the model’s performance over time. The Prediction Metrics job generates records that track the overall numbers, as well as document classification-specific performance.

You can view the following object fields from the Trained Model page layout:

  • Model Performance ID: Unique ID, assigned by Vault
  • Created Date: Date the prediction metric was calculated
  • Trained Model Type: The Trained Model Type being evaluated, for example, Auto-Classification
  • Metric Type: Metric type presented
  • Metric Subtype: Subtype of the metric presented
  • Number of Documents: The number of documents used to test this model
  • Success Rate: The rate at which predictions on which the system acted were confirmed as true predictions (Correct Predictions divided by Number of Documents)
  • Correct Predictions: The number of times the prediction was accurate
  • Correct Predictions Above Threshold: The number of times the prediction was accurate and above the selected Trained Model Confidence Threshold
  • Predictions Above Threshold: The number of times the prediction was above the selected Trained Model Confidence Threshold
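The Success Rate definition above is a simple ratio. This sketch uses illustrative counts, not real metrics:

```python
# Success Rate as defined above: Correct Predictions divided by
# Number of Documents. The counts here are illustrative only.
number_of_documents = 500
correct_predictions = 460

success_rate = correct_predictions / number_of_documents
print(f"{success_rate:.1%}")  # 92.0%
```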