State-of-the-art AI assurance is often model- and domain-specific. We present two model-agnostic pipelines, the Adversarial Logging Scoring Pipeline (ALSP) and the Requirements Feedback Scoring Pipeline (RFSP), that score explainability, safety, security, fairness, trustworthiness, and ethics. ALSP uses game-theoretic weighting, adversarial logging, and secret inversion to detect malicious inputs and quantify assurance. RFSP is user-driven: it gathers user preferences for assurance weights, segments data, and optimizes hyperparameters (via grid and Bayesian search) to reflect the desired goals. Both pipelines are validated on SCADA, telecommunications, and water datasets, producing quantifiable assurance scores and surfacing trade-offs among AI goals.
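As a rough illustration of the preference-weighted scoring idea behind RFSP, the sketch below combines per-dimension assurance scores under user-supplied weights. The dimension names come from the abstract, but the function, the normalization scheme, and all numeric values are hypothetical assumptions, not the paper's actual method.

```python
# Hypothetical sketch of an RFSP-style weighted assurance score.
# The normalization scheme and example values are assumptions,
# not taken from the paper.

ASSURANCE_DIMENSIONS = (
    "explainability", "safety", "security",
    "fairness", "trustworthiness", "ethics",
)

def assurance_score(scores: dict, weights: dict) -> float:
    """Combine per-dimension scores (each in [0, 1]) into one number.

    User-supplied weights encode assurance preferences; weights are
    normalized so the aggregate stays in [0, 1].
    """
    total_weight = sum(weights[d] for d in ASSURANCE_DIMENSIONS)
    weighted_sum = sum(scores[d] * weights[d] for d in ASSURANCE_DIMENSIONS)
    return weighted_sum / total_weight

# Example: a user who weights safety three times as heavily as the rest.
scores = {d: 0.8 for d in ASSURANCE_DIMENSIONS}
scores["safety"] = 0.6
weights = {d: 1.0 for d in ASSURANCE_DIMENSIONS}
weights["safety"] = 3.0

print(round(assurance_score(scores, weights), 3))  # → 0.725
```

Because the safety score is low and its weight is high, the aggregate (0.725) falls below the unweighted mean of roughly 0.767, surfacing the trade-off the user cares about.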