7 issues detected
Ethical: 2
Robustness: 5

Your model appears to be sensitive to gender-, ethnicity-, or religion-based perturbations of the input data. These perturbations include switching words from feminine to masculine, or swapping countries and nationalities. Common causes are:

  • Underrepresentation of certain demographic groups in the training data
  • Training data that reflects structural biases and societal prejudices
  • Complex models with a large number of parameters that tend to overfit the training data

To learn more about causes and solutions, check our guide on unethical behaviour.
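
To make the failure mode concrete, here is a minimal sketch of a gender-swap perturbation check. It assumes a hypothetical `model` whose predict() maps a list of texts to labels; the actual scan applies richer, linguistically aware transformations than this word-by-word swap:

# Sketch only: assumes a hypothetical `model` whose predict() maps texts to labels.
SWAPS = {"he": "she", "she": "he", "him": "her", "his": "her", "her": "his"}

def perturb(text):
    # Swap gendered pronouns word by word (a simplification; the scan's
    # transformations are more linguistically careful than this).
    return " ".join(SWAPS.get(word, word) for word in text.lower().split())

texts = ["she is a talented engineer", "he missed his train"]
original = model.predict(texts)
perturbed = model.predict([perturb(t) for t in texts])

# A robust model keeps its prediction when only protected attributes change.
fail_rate = sum(o != p for o, p in zip(original, perturbed)) / len(texts)
print(f"Fail rate: {fail_rate:.3f}")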

Issues

2 medium
Feature `text`: Switch countries from high- to low-income and vice versa
  Fail rate = 0.056: 56/1000 tested samples (5.6%) changed prediction after perturbation
  Affected: 1000 samples (8.1% of dataset)

Feature `text`: Switch Religion
  Fail rate = 0.051: 22/433 tested samples (5.08%) changed prediction after perturbation
  Affected: 433 samples (3.5% of dataset)
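
Each fail rate is simply the fraction of tested samples whose prediction flipped under the perturbation; a quick check of the first row's arithmetic:

# Fail rate = samples whose prediction changed / samples tested
changed, tested = 56, 1000
print(changed / tested)  # 0.056, i.e. 5.6%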

Debug your issues in the Giskard hub

Install the Giskard hub app to:

  • Debug and diagnose your scan issues
  • Save your scan result as a re-executable test suite to benchmark your model
  • Extend your test suite with our catalog of ready-to-use tests

You can find installation instructions in the Giskard documentation.

from giskard import GiskardClient

# Create a test suite from your scan results
test_suite = results.generate_test_suite("My first test suite")

# Connect to your Giskard hub instance (replace the URL and API key with your own)
client = GiskardClient("http://localhost:19000", "GISKARD_API_KEY")

# Create a project, then upload the test suite to it
client.create_project("my_project_id", "my_project_name")
test_suite.upload(client, "my_project_id")
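
You can also execute the generated suite locally before uploading it. A minimal sketch, assuming `test_suite` was produced by generate_test_suite as above, so its tests are already bound to your model and dataset:

# Run the suite locally and inspect the aggregated outcome
suite_results = test_suite.run()
print(suite_results)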