ValidMind Library

The ValidMind Library is a suite of developer tools and methods designed to automate the documentation and validation of your models.

Designed to be model agnostic, the ValidMind Library provides all the standard functionality without requiring you to rewrite any functions as long as your model is built in Python.

With a rich array of documentation tools and test suites, from documenting descriptions of your datasets to testing your models for weak spots and overfitting, the ValidMind Library helps you automate model documentation by feeding the ValidMind Platform with documentation artifacts and test results.

To install the ValidMind Library:
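The library is published on PyPI, so a typical installation looks like the following (available extras, if any, may vary by release):

```shell
pip install validmind
```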

To initialize the ValidMind Library, paste the code snippet with the model identifier credentials directly into your development source code, replacing this example with your own:

import validmind as vm

vm.init(
    api_host="https://api.dev.vm.validmind.ai/api/v1/tracking",
    api_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    api_secret="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    project="<project-identifier>"
)

After you have pasted the code snippet into your development source code and executed it, the Python Library API registers with ValidMind. You can then use the ValidMind Library to document and test your models, and to upload the results to the ValidMind Platform.

__version__

2.8.13

init

def init(project: Optional[str] = None, api_key: Optional[str] = None, api_secret: Optional[str] = None, api_host: Optional[str] = None, model: Optional[str] = None, monitoring: bool = False, generate_descriptions: Optional[bool] = None):

Initializes the API client instances and calls the /ping endpoint to ensure the provided credentials are valid and we can connect to the ValidMind API.

If the API key and secret are not provided, the client will attempt to retrieve them from the environment variables VM_API_KEY and VM_API_SECRET.

Arguments

Raises

init_dataset

def init_dataset(dataset: Union[pd.DataFrame, pl.DataFrame, np.ndarray, torch.utils.data.TensorDataset], model: Optional[validmind.vm_models.VMModel] = None, index: Optional[Any] = None, index_name: Optional[str] = None, date_time_index: bool = False, columns: Optional[List[str]] = None, text_column: Optional[str] = None, target_column: Optional[str] = None, feature_columns: Optional[List[str]] = None, extra_columns: Optional[Dict[str, Any]] = None, class_labels: Optional[Dict[str, Any]] = None, type: Optional[str] = None, input_id: Optional[str] = None, __log: bool = True) → validmind.vm_models.VMDataset:

Initializes a VM Dataset, which can then be passed to other functions that can perform additional analysis and tests on the data. This function also ensures we are reading a valid dataset type.

The following dataset types are supported:

Arguments

Returns

Raises

init_model

def init_model(model: Optional[object] = None, input_id: str = 'model', attributes: Optional[Dict[str, Any]] = None, predict_fn: Optional[Callable] = None, __log: bool = True, **kwargs: Any) → validmind.vm_models.VMModel:

Initializes a VM Model, which can then be passed to other functions that can perform additional analysis and tests on the model. This function also ensures the model was built with a supported library.

Arguments

Returns

Raises

init_r_model

Initialize a VM Model from an R model.

LogisticRegression and LinearRegression models are converted to sklearn models by extracting the coefficients and intercept from the R model. XGB models are loaded using the xgboost library, since XGB models saved in .json or .bin format can be loaded directly with either Python or R.

Arguments

Returns

get_test_suite

def get_test_suite(test_suite_id: Optional[str] = None, section: Optional[str] = None, *args: Any, **kwargs: Any) → validmind.vm_models.TestSuite:

Gets a TestSuite object for the current project or a specific test suite.

This function provides an interface to retrieve the TestSuite instance for the current project, or a specific TestSuite instance identified by test_suite_id. The project Test Suite will contain a section for every section in the project's documentation template, and each of these Test Suite Sections will contain all the tests associated with the corresponding template section.

Arguments

Returns

log_metric

def log_metric(key: str, value: float, inputs: Optional[List[str]] = None, params: Optional[Dict[str, Any]] = None, recorded_at: Optional[str] = None, thresholds: Optional[Dict[str, Any]] = None):

Logs a unit metric.

Unit metrics are key-value pairs where the key is the metric name and the value is a scalar (int or float). These key-value pairs are associated with the currently selected model (inventory model in the ValidMind Platform), and keys can be logged to over time to create a history of the metric. On the ValidMind Platform, these metrics are used to create plots and visualizations for documentation, dashboards, and other views.

Arguments

preview_template

def preview_template():

Preview the documentation template for the current project.

This function will display the documentation template for the current project. If the project has not been initialized, then an error will be raised.

Raises

print_env

def print_env():

Prints a log of the running environment for debugging.

Output includes: ValidMind Library version, operating system details, installed dependencies, and the ISO 8601 timestamp at log creation.

reload

def reload():

Reconnect to the ValidMind API and reload the project configuration.

run_documentation_tests

def run_documentation_tests(section: Optional[str] = None, send: bool = True, fail_fast: bool = False, inputs: Optional[Dict[str, Any]] = None, config: Optional[Dict[str, Any]] = None, **kwargs: Any) → Union[validmind.vm_models.TestSuite, Dict[str, validmind.vm_models.TestSuite]]:

Collect and run all the tests associated with a template.

This function will analyze the current project's documentation template and collect all the tests associated with it into a test suite. It will then run the test suite, log the results to the ValidMind API, and display them to the user.

Arguments

Returns

Raises

run_test_suite

def run_test_suite(test_suite_id: str, send: bool = True, fail_fast: bool = False, config: Optional[Dict[str, Any]] = None, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → validmind.vm_models.TestSuite:

High-level function for running a test suite.

This function provides a high level interface for running a test suite. A test suite is a collection of tests. This function will automatically find the correct test suite class based on the test_suite_id, initialize each of the tests, and run them.

Arguments

Returns

Raises

tags

def tags(*tags: str):

Decorator for specifying tags for a test.

Arguments

tasks

def tasks(*tasks: str):

Decorator for specifying the task types that a test is designed for.

Arguments

test

def test(func_or_id: Union[Callable[..., Any], str, None]):

Decorator for creating and registering custom tests.

This decorator registers the function it wraps as a test function within ValidMind under the provided ID. Once decorated, the function can be run using the validmind.tests.run_test function.

The function can take two different types of arguments:

The function should return one of the following types:

The function may also include a docstring. This docstring will be used and logged as the metric's description.

Arguments

Returns

log_text

def log_text(content_id: str, text: str, _json: Optional[Dict[str, Any]] = None) → Dict[str, Any]:

Logs free-form text to the ValidMind API.

Arguments

Returns

Raises

RawData

class RawData:

Holds raw data for a test result.

RawData

RawData(log: bool = False, **kwargs: Any)

Create a new RawData object.

Arguments

inspect

def inspect(self, show: bool = True) → Optional[Dict[str, Any]]:

Inspect the raw data.

Arguments

Returns

serialize

def serialize(self) → Dict[str, Any]:

Serialize the raw data to a dictionary.

Returns