Verifying the scientific computing stack is the initial step before executing any machine learning pipeline. A consistent environment prevents version conflicts during model development. The following script programmatically checks the installed versions of core dependencies:
import sys
import importlib
required_packages = {
    'scipy': 'scipy',
    'numpy': 'numpy',
    'matplotlib': 'matplotlib',
    'pandas': 'pandas',
    'sklearn': 'scikit-learn'
}
print(f"Runtime: Python {sys.version.split()[0]}")
for module_name, pkg_name in required_packages.items():
    try:
        mod = importlib.import_module(module_name)
        print(f"{pkg_name}: {mod.__version__}")
    except ImportError:
        print(f"{pkg_name}: Not installed")
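If the project also pins minimum versions, the same idea can enforce them. The sketch below assumes the packaging library is available; the version thresholds are illustrative placeholders, not requirements stated in this text:
from packaging import version
import importlib
# Hypothetical minimum versions -- adjust to the project's actual requirements
minimum_versions = {'numpy': '1.21', 'pandas': '1.3', 'sklearn': '1.0'}
for module_name, min_ver in minimum_versions.items():
    mod = importlib.import_module(module_name)
    if version.parse(mod.__version__) < version.parse(min_ver):
        print(f"{module_name} {mod.__version__} is older than the required {min_ver}")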
Core data manipulation relies on structured arrays and tabular objects. NumPy handles numerical computations, while Pandas manages labeled datasets. Matplotlib provides the rendering engine for visual outputs. The example below demonstrates matrix initialization, DataFrame construction, and basic line rendering:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Initialize a 2D numerical grid
grid_values = np.array([[10, 20, 30], [40, 50, 60]])
row_labels = ['record_x', 'record_y']
col_labels = ['feature_a', 'feature_b', 'feature_c']
# Convert to tabular format
tabular_data = pd.DataFrame(grid_values, index=row_labels, columns=col_labels)
print(tabular_data)
# Render a basic line chart
fig, axis = plt.subplots(figsize=(6, 4))
axis.plot(grid_values, marker='o', linestyle='-')
axis.set_title("Feature Progression")
axis.grid(True)
plt.tight_layout()
plt.show()
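Because the DataFrame already carries row and column labels, plotting it directly (rather than the raw array) lets the chart pick up named series automatically. This is a small optional variation on the chart above, not a required step:
# Plot from the labeled DataFrame so each feature column becomes a named series
ax = tabular_data.plot(marker='o', linestyle='-', figsize=(6, 4), title="Feature Progression (labeled)")
ax.set_xlabel("record")
ax.grid(True)
plt.tight_layout()
plt.show()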
Ingesting external datasets typically involves parsing comma-separated values. Pandas offers optimized I/O routines that handle headers, delimiters, and type inference automatically. When working with raw repositories, explicit column assignment ensures schema consistency:
import pandas as pd
dataset_uri = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
schema_columns = ['pregnancies', 'glucose', 'blood_pressure', 'skin_thickness',
                  'insulin', 'bmi', 'diabetes_pedigree', 'age', 'outcome']
raw_dataset = pd.read_csv(dataset_uri, header=None, names=schema_columns)
print(raw_dataset.sample(3))
Statistical profiling reveals underlying distributions and potential anomalies before model training. Key metrics include dimensionality, data types, central tendency, dispersion, and inter-feature correlation. Executing these checks sequentially builds a comprehensive data profile:
# Dimensionality and schema types
print(f"Dimensions: {raw_dataset.shape}")
print(raw_dataset.dtypes)
# Summary statistics and correlation matrix
statistical_summary = raw_dataset.describe()
correlation_matrix = raw_dataset.corr(numeric_only=True)
print(statistical_summary)
print(correlation_matrix['outcome'].sort_values(ascending=False))
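Missing or placeholder values are another anomaly worth surfacing at this stage. A quick null count per column (a minimal check, not part of the profiling steps above) makes gaps visible before any imputation decisions:
# Count explicit missing entries per column before deciding on imputation
print(raw_dataset.isnull().sum())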
Visual exploration complements numerical summaries by highlighting skewness, outliers, and multivariate relationships. Histograms display frequency distributions, boxplots identify quartiles and extremes, and scatter matrices reveal pairwise interactions across the feature space:
import matplotlib.pyplot as plt
import pandas as pd
# Distribution analysis
raw_dataset.hist(bins=15, color='teal', edgecolor='black', figsize=(10, 8))
plt.suptitle("Feature Distributions", y=1.02)
plt.tight_layout()
plt.show()
# Outlier detection via quartiles
raw_dataset.plot(kind='box', subplots=True, layout=(3, 3), figsize=(10, 8), color='darkorange')
plt.suptitle("Quartile Analysis", y=1.02)
plt.tight_layout()
plt.show()
# Pairwise relationships
pd.plotting.scatter_matrix(raw_dataset, alpha=0.6, figsize=(10, 10), diagonal='kde')
plt.tight_layout()
plt.show()
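A correlation heatmap condenses the pairwise view further. The sketch below is one optional complement to the scatter matrix; it reuses the correlation_matrix computed during statistical profiling and relies only on Matplotlib's imshow:
# Heatmap of the correlation matrix computed earlier
fig, ax = plt.subplots(figsize=(8, 7))
image = ax.imshow(correlation_matrix, cmap='coolwarm', vmin=-1, vmax=1)
ax.set_xticks(range(len(correlation_matrix.columns)))
ax.set_yticks(range(len(correlation_matrix.columns)))
ax.set_xticklabels(correlation_matrix.columns, rotation=90)
ax.set_yticklabels(correlation_matrix.columns)
fig.colorbar(image, ax=ax, label='correlation')
plt.tight_layout()
plt.show()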
Raw datasets rarely meet algorithmic requirements directly. Preprocessing transforms features into standardized, model-compatible formats. Common transformations include magnitude scaling, threshold binarization, categorical encoding, and missing value imputation. Scikit-learn provides stateful transformers that learn parameters from training data and apply them consistently:
from sklearn.preprocessing import StandardScaler, Binarizer, OneHotEncoder
from sklearn.impute import SimpleImputer
import pandas as pd
import numpy as np
# Construct a mock dataset with missing values and mixed types
mock_df = pd.DataFrame({
    'num_feat1': [1.2, np.nan, 2.1, 3.5],
    'num_feat2': [0.5, 0.8, np.nan, 1.5],
    'cat_feat': ['A', 'B', 'A', 'B']
})
# 1. Impute missing numerical entries with median
num_imputer = SimpleImputer(strategy='median')
imputed_nums = num_imputer.fit_transform(mock_df[['num_feat1', 'num_feat2']])
# 2. Standardize numerical range to zero mean and unit variance
scaler = StandardScaler()
scaled_nums = scaler.fit_transform(imputed_nums)
# 3. Binarize the second feature based on a threshold
binarizer = Binarizer(threshold=1.0)
binary_feat = binarizer.transform(imputed_nums[:, 1].reshape(-1, 1))
# 4. Encode categorical column
cat_encoder = OneHotEncoder(sparse_output=False, handle_unknown='ignore')
encoded_cats = cat_encoder.fit_transform(mock_df[['cat_feat']])
print("Scaled Numerical:\n", scaled_nums)
print("Binarized Feature:\n", binary_feat)
print("Encoded Categorical:\n", encoded_cats)
Applying these transformations systematically ensures that downstream estimators receive numerically stable inputs, accelerating convergence and improving generalization performance.
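One common way to apply such steps systematically is to bundle them into a scikit-learn Pipeline with a ColumnTransformer, so the parameters fitted on training data are reused unchanged at prediction time. The sketch below reuses the mock_df columns from above; the classifier and the invented labels for the four mock rows are illustrative placeholders, not recommendations from this text:
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
# Separate numerical and categorical handling, then chain preprocessing with an estimator
numeric_steps = Pipeline([
    ('impute', SimpleImputer(strategy='median')),
    ('scale', StandardScaler())
])
preprocessor = ColumnTransformer([
    ('nums', numeric_steps, ['num_feat1', 'num_feat2']),
    ('cats', OneHotEncoder(handle_unknown='ignore'), ['cat_feat'])
])
model = Pipeline([
    ('prep', preprocessor),
    ('clf', LogisticRegression(max_iter=1000))  # placeholder estimator
])
# Illustrative labels for the four mock rows; fit learns imputation, scaling,
# and encoding parameters from the training data only
mock_labels = [0, 1, 0, 1]
model.fit(mock_df, mock_labels)
print(model.predict(mock_df))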