Bring-Your-Own-Task (BYOT)¶
Date created: 2024/08/15
Last Modified: 2024/08/15
Description: Create custom task for HeartKit end-to-end
Overview¶
In this notebook, we provide a complete walkthrough of creating a custom task. To keep things simple, we will create a task that predicts heart rate from a raw ECG signal.
Below we outline the high-level steps to create a custom task:
- Identify datasets and create corresponding dataloaders (e.g. PTB-XL)
- Create data pipelines for the training, validation, and test sets
- Implement task routines for the modes: train, evaluate, export, and optionally demo
In this example, we will implement only the train and evaluate modes.
Datasets
- PTB-XL: PTB-XL is a large, publicly available electrocardiography dataset. It contains 21,837 clinical 12-lead, 10-second ECG records from 18,885 patients. The ECGs are sampled at 500 Hz and are annotated by up to two cardiologists.
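If PTB-XL is not already on disk, the dataset object itself can typically fetch it. A minimal sketch, assuming the dataset class exposes a download() method and that HK_DATASET_PATH points at your dataset root:
import os
from pathlib import Path
import heartkit as hk

ds = hk.DatasetFactory.get("ptbxl")(path=Path(os.environ["HK_DATASET_PATH"]) / "ptbxl")
ds.download()  # assumption: no-op if the dataset is already present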
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import random
from typing import Generator
from collections.abc import Iterable
from pathlib import Path
import tempfile
import keras
import heartkit as hk
import physiokit as pk
import tensorflow as tf
import numpy as np
import numpy.typing as npt
import neuralspot_edge as nse
import matplotlib.pyplot as plt
# Be sure to set the dataset path to the correct location
os.environ['HK_DATASET_PATH'] = os.getenv('HK_DATASET_PATH', './datasets')
plot_theme = hk.utils.dark_theme
nse.utils.silence_tensorflow()
_ = hk.utils.setup_plotting(plot_theme)
1. Create Dataloaders¶
We will create a dataloader class for PTB-XL since the dataset provides heart beat locations via blabels.
Given a raw ECG signal, we will compute the heart rate from the beat locations within the frame. The rate is calculated from the RR intervals using PhysioKit. The output will be the ECG frame and the heart rate in beats per second.
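To make the label math concrete, here is a tiny NumPy-only sketch of the RR-interval averaging used below; the beat locations are hypothetical, chosen only for illustration:
import numpy as np

fs = 500.0                             # sampling rate (Hz)
beats = np.array([0, 400, 810, 1215])  # hypothetical beat locations (samples)
rri = np.diff(beats)                   # RR intervals: [400, 410, 405] samples
bpm = 60.0 / (np.nanmean(rri) / fs)    # mean RR = 0.81 s -> ~74 BPM
print(f"{bpm:.1f} BPM")                # 74.1 BPM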
class PtbxlDataloader(hk.HKDataloader):
    def __init__(self, ds: hk.datasets.PtbxlDataset, **kwargs):
        """Dataloader for PTB-XL to generate HeartRateTask data."""
        super().__init__(ds=ds, **kwargs)

    def patient_data_generator(
        self,
        patient_id: int,
        samples_per_patient: int,
    ):
        # Compute input size (might be different due to sampling rate)
        input_size = int(np.ceil((self.ds.sampling_rate / self.sampling_rate) * self.frame_size))
        with self.ds.patient_data(patient_id) as h5:
            ecg = h5["data"][:]
            # Beat locations. Convert 100 Hz to ds.sampling_rate
            blabels = h5["blabels"][:, 0] * (self.ds.sampling_rate / 100.0)
        # END WITH
        for _ in range(samples_per_patient):
            # Select random lead and frame location
            lead = random.choice(self.ds.leads)
            frame_start = np.random.randint(0, ecg.shape[1] - input_size)
            frame_end = frame_start + input_size
            # Compute BPM by selecting beats within frame, computing RR intervals, and averaging
            frame_blabels = blabels[(blabels >= frame_start) & (blabels < frame_end)]
            rri = pk.ecg.compute_rr_intervals(frame_blabels)
            bpm = 60.0 / (np.nanmean(rri) / self.ds.sampling_rate)
            # Extract ECG frame
            x = ecg[lead, frame_start:frame_end].copy()
            # Resample if needed
            if self.ds.sampling_rate != self.sampling_rate:
                x = pk.signal.resample_signal(x, self.ds.sampling_rate, self.sampling_rate, axis=0)
                x = x[: self.frame_size]  # Ensure frame size
            x = np.nan_to_num(x).astype(np.float32)
            x = x.reshape(-1, 1)
            y = bpm / 60.0  # Convert to beats per second
            yield x, y
        # END FOR

    def data_generator(
        self,
        patient_ids: list[int],
        samples_per_patient: int | list[int],
        shuffle: bool = False,
    ) -> Generator[tuple[npt.NDArray, npt.NDArray], None, None]:
        if isinstance(samples_per_patient, Iterable):
            samples_per_patient = samples_per_patient[0]
        for pt_id in nse.utils.uniform_id_generator(patient_ids, shuffle=shuffle):
            for x, y in self.patient_data_generator(pt_id, samples_per_patient):
                yield x, y
            # END FOR
        # END FOR
Visualize output of dataloader¶
We will grab a single sample from the dataloader and visualize the output.
ds = hk.DatasetFactory.get("ptbxl")(
path=Path(os.environ['HK_DATASET_PATH']) / "ptbxl"
)
dl = PtbxlDataloader(
ds=ds,
frame_size=4000,
sampling_rate=500,
)
patient_ids = np.random.permutation(ds.patient_ids)
x, y = next(dl.data_generator(patient_ids=patient_ids, samples_per_patient=1))
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
ax.plot(x, label=f"HR: {60*y:0.0f} BPM")
ax.set_title("ECG Frame")
ax.legend()
fig.show()
Register dataloaders to factory¶
We will then create a simple DataloaderFactory to ease the creation of dataloaders based on dataset names.
DataloaderFactory = nse.utils.create_factory(factory="BYOT.DataloaderFactory", type=hk.HKDataloader)
DataloaderFactory.register("ptbxl", PtbxlDataloader)
2. Create Data Pipeline¶
We will create a data pipeline that will be used to train and evaluate the model. For each dataset, we will:
- Load the dataset via hk.DatasetFactory
- Split the dataset patients into training and validation sets
- Load the corresponding dataloader via our DataloaderFactory and create a tf.data.Dataset for training and validation
Once each dataset has a pair of training and validation datasets, we will combine them into a single training dataset and a single validation dataset. We will then extend the pipeline by adding the following:
- Shuffle the dataset (if training)
- Batch the dataset
- Apply augmentations/preprocessing (if any)
- Prefetch the dataset
Lastly, we will cache the validation set, which will (1) speed up evaluation and (2) ensure that the same fixed-size validation set is used for each epoch.
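The take-then-cache pattern is worth seeing in isolation. A toy tf.data sketch (unrelated to the ECG data) showing that the cached subset is materialized once and then replayed identically each epoch:
import tensorflow as tf

toy = tf.data.Dataset.range(1000).batch(10)
val = toy.take(5).cache()  # 5 fixed batches, computed once and replayed thereafter

for epoch in range(2):
    batches = list(val.as_numpy_iterator())
    print(len(batches))  # 5 on every epoch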
def create_data_pipeline(
    ds: tf.data.Dataset,
    sampling_rate: int,
    batch_size: int,
    buffer_size: int | None = None,
    augmentations: list[hk.NamedParams] | None = None,
) -> tf.data.Dataset:
    """Transforms a dataset into a pipeline with augmentations.

    Args:
        ds (tf.data.Dataset): Input dataset.
        sampling_rate (int): Sampling rate of the dataset.
        batch_size (int): Batch size.
        buffer_size (int | None): Buffer size for shuffling.
        augmentations (list[hk.NamedParams] | None): List of augmentations to apply.

    Returns:
        tf.data.Dataset: Augmented dataset
    """
    if buffer_size:
        ds = ds.shuffle(
            buffer_size=buffer_size,
            reshuffle_each_iteration=True,
        )
    if batch_size:
        ds = ds.batch(
            batch_size=batch_size,
            drop_remainder=True,
            num_parallel_calls=tf.data.AUTOTUNE,
        )
    augmenter = hk.datasets.create_augmentation_pipeline(
        augmentations,
        sampling_rate=sampling_rate,
    )
    ds = (
        ds.map(
            lambda data, labels: {
                "data": tf.cast(data, "float32"),
                "labels": tf.cast(labels, "float32"),
            },
            num_parallel_calls=tf.data.AUTOTUNE,
        )
        .map(
            augmenter,
            num_parallel_calls=tf.data.AUTOTUNE,
        )
        .map(
            lambda data: (data["data"], data["labels"]),
            num_parallel_calls=tf.data.AUTOTUNE,
        )
    )
    return ds.prefetch(tf.data.AUTOTUNE)
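As a hypothetical smoke test, we can push synthetic frames through the pipeline; this assumes hk.datasets.create_augmentation_pipeline tolerates an empty augmentation list:
toy = tf.data.Dataset.from_tensor_slices((
    np.random.randn(32, 4000, 1).astype("float32"),  # fake ECG frames
    np.random.rand(32).astype("float32"),            # fake beats-per-second labels
))
pipe = create_data_pipeline(toy, sampling_rate=500, batch_size=8, augmentations=[])
xb, yb = next(iter(pipe))
print(xb.shape, yb.shape)  # (8, 4000, 1) (8,)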
def load_train_datasets(
    datasets: list[hk.HKDataset],
    dataloaderFactory: nse.utils.ItemFactory[hk.HKDataloader],
    params: hk.HKTaskParams,
) -> tuple[tf.data.Dataset, tf.data.Dataset]:
    """Loads training and validation datasets.

    Args:
        datasets (list[hk.HKDataset]): List of datasets to load.
        dataloaderFactory (nse.utils.ItemFactory[hk.HKDataloader]): Factory to create dataloaders.
        params (hk.HKTaskParams): Task parameters.

    Returns:
        tuple[tf.data.Dataset, tf.data.Dataset]: Training and validation datasets.
    """
    # This will load each dataset/dataloader, split subjects, and merge into a single tf.data.Dataset
    train_ds, val_ds = hk.tasks.utils.load_train_dataloader_split(datasets, params, factory=dataloaderFactory)

    # Create training and validation pipelines
    train_ds = create_data_pipeline(
        ds=train_ds,
        sampling_rate=params.sampling_rate,
        batch_size=params.batch_size,
        buffer_size=params.buffer_size,
        augmentations=params.augmentations + params.preprocesses,
    )
    val_ds = create_data_pipeline(
        ds=val_ds,
        sampling_rate=params.sampling_rate,
        batch_size=params.batch_size,
        augmentations=params.preprocesses,
    )

    # Cache validation dataset
    val_steps_per_epoch = params.val_size // params.batch_size if params.val_size else params.val_steps_per_epoch
    val_steps_per_epoch = val_steps_per_epoch or 50
    val_ds = val_ds.take(val_steps_per_epoch).cache()

    return train_ds, val_ds
3. Create task routines¶
We will create a task that predicts heart rate from a raw ECG signal. The task will have the following modes:
- train: Train the model
- evaluate: Evaluate the model
def train(params: hk.HKTaskParams):
    """Train model

    Args:
        params (hk.HKTaskParams): Training parameters
    """
    os.makedirs(params.job_dir, exist_ok=True)
    logger = nse.utils.setup_logger(__name__, level=params.verbose, file_path=params.job_dir / "train.log")
    logger.debug(f"Creating working directory in {params.job_dir}")

    params.seed = nse.utils.set_random_seed(params.seed)
    logger.debug(f"Random seed {params.seed}")

    with open(params.job_dir / "train_config.json", "w", encoding="utf-8") as fp:
        fp.write(params.model_dump_json(indent=2))

    params.num_classes = 1  # Regression

    feat_shape = (params.frame_size, 1)

    datasets = [hk.DatasetFactory.get(ds.name)(**ds.params) for ds in params.datasets]

    train_ds, val_ds = load_train_datasets(
        datasets=datasets,
        dataloaderFactory=DataloaderFactory,
        params=params,
    )

    inputs = keras.Input(shape=feat_shape, name="input", dtype="float32")

    # Load existing model
    if params.resume and params.model_file:
        logger.debug(f"Loading model from file {params.model_file}")
        model = nse.models.load_model(params.model_file)
        params.model_file = None
    else:
        logger.debug("Creating model from scratch")
        if params.architecture is None:
            raise ValueError("Model architecture must be specified")
        model = hk.ModelFactory.get(params.architecture.name)(
            inputs=inputs,
            params=params.architecture.params,
            num_classes=params.num_classes,
        )
    # END IF

    flops = nse.metrics.flops.get_flops(model, batch_size=1, fpath=params.job_dir / "model_flops.log")

    t_mul = 1
    first_steps = (params.steps_per_epoch * params.epochs) / (np.power(params.lr_cycles, t_mul) - t_mul + 1)
    scheduler = keras.optimizers.schedules.CosineDecayRestarts(
        initial_learning_rate=params.lr_rate,
        first_decay_steps=np.ceil(first_steps),
        t_mul=t_mul,
        m_mul=0.5,
    )
    optimizer = keras.optimizers.Adam(scheduler)
    loss = keras.losses.MeanSquaredError()
    metrics = [
        keras.metrics.MeanAbsoluteError(name="mae"),
        keras.metrics.MeanSquaredError(name="mse"),
        keras.metrics.R2Score(name="rsq"),
    ]

    if params.model_file is None:
        params.model_file = params.job_dir / "model.keras"

    model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
    logger.debug(f"Model requires {flops/1e6:0.2f} MFLOPS")

    model_callbacks = [
        keras.callbacks.EarlyStopping(
            monitor=f"val_{params.val_metric}",
            patience=max(int(0.25 * params.epochs), 1),
            mode="max" if params.val_metric == "f1" else "auto",
            restore_best_weights=True,
            verbose=min(params.verbose - 1, 1),
        ),
        keras.callbacks.ModelCheckpoint(
            filepath=str(params.model_file),
            monitor=f"val_{params.val_metric}",
            save_best_only=True,
            mode="max" if params.val_metric == "f1" else "auto",
            verbose=min(params.verbose - 1, 1),
        ),
        keras.callbacks.CSVLogger(params.job_dir / "history.csv"),
    ]

    history = model.fit(
        train_ds,
        steps_per_epoch=params.steps_per_epoch,
        verbose=params.verbose,
        epochs=params.epochs,
        validation_data=val_ds,
        callbacks=model_callbacks,
    )
    logger.debug(f"Model saved to {params.model_file}")

    nse.plotting.plot_history_metrics(
        history.history,
        metrics=["loss", metrics[0].name],
        save_path=params.job_dir / "history.png",
        title="Training History",
        stack=True,
        figsize=(9, 5),
    )

    # Summarize results
    rst = model.evaluate(val_ds, return_dict=True)
    logger.info("[VAL SET] " + ", ".join(f"{k.upper()}={v:.4f}" for k, v in rst.items()))
def evaluate(params: hk.HKTaskParams):
    """Evaluate model

    Args:
        params (hk.HKTaskParams): Evaluation parameters
    """
    os.makedirs(params.job_dir, exist_ok=True)
    logger = nse.utils.setup_logger(__name__, level=params.verbose, file_path=params.job_dir / "test.log")
    logger.debug(f"Creating working directory in {params.job_dir}")

    params.seed = nse.utils.set_random_seed(params.seed)
    logger.debug(f"Random seed {params.seed}")

    datasets = [hk.DatasetFactory.get(ds.name)(**ds.params) for ds in params.datasets]

    # For simplicity, we reuse the validation split as the test set
    _, test_ds = load_train_datasets(
        datasets=datasets,
        dataloaderFactory=DataloaderFactory,
        params=params,
    )
    test_x = np.concatenate([x for x, _ in test_ds.as_numpy_iterator()])
    test_y = np.concatenate([y for _, y in test_ds.as_numpy_iterator()])

    logger.debug("Loading model")
    model = nse.models.load_model(params.model_file)

    logger.debug("Performing inference")
    rst = model.evaluate(test_ds, verbose=params.verbose, return_dict=True)
    logger.info("[TEST SET] " + ", ".join([f"{k.upper()}={v:.2%}" for k, v in rst.items()]))

    y_pred = model.predict(test_x)

    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
    ax.scatter(y_pred * 60, test_y * 60)
    ax.set_title("Predicted vs True BPM")
    ax.set_xlabel("Predicted BPM")
    ax.set_ylabel("True BPM")
    ax.annotate(f"R2={rst['rsq']:.2f}", xy=(0.05, 0.95), xycoords="axes fraction")
    fig.tight_layout()
    fig.show()
    fig.savefig(params.job_dir / "bpm_plot.png")
Create the HeartRateTask class and register it with the factory¶
class HeartRateTask(hk.HKTask):
    @staticmethod
    def train(params: hk.HKTaskParams):
        train(params)

    @staticmethod
    def evaluate(params: hk.HKTaskParams):
        evaluate(params)

    @staticmethod
    def export(params: hk.HKTaskParams) -> None:
        raise NotImplementedError("Export not implemented")

    @staticmethod
    def demo(params: hk.HKTaskParams) -> None:
        raise NotImplementedError("Demo not implemented")

hk.TaskFactory.register("heartrate", HeartRateTask)
for task_name in hk.TaskFactory.list():
    print(task_name)
rhythm
beat
segmentation
diagnostic
denoise
foundation
translate
heartrate
4. Let's test out the new task!¶
First we will create a task configuration with the following features:
- Frame Size: 8 seconds
- Dataset: PTB-XL
- Model: EfficientNetV2 with 4 MBConv blocks, each of depth 1
- Batch Size: 256
- Buffer Size: 20,000
- Learning Rate: 1e-3
- Preprocess: Z-score normalization
params = hk.HKTaskParams(
    name="BYOT-HR",
    job_dir=Path(tempfile.gettempdir()) / "hk-byot-hr",
    verbose=1,
    datasets=[
        hk.NamedParams(
            name="ptbxl",
            params=dict(
                path=Path(os.environ["HK_DATASET_PATH"]) / "ptbxl",
            ),
        ),
    ],
    frame_size=4000,  # 8 seconds
    sampling_rate=500,  # 500 Hz
    samples_per_patient=5,
    val_samples_per_patient=5,
    val_patients=0.2,
    val_size=10000,
    batch_size=256,
    buffer_size=20000,
    epochs=100,
    steps_per_epoch=50,
    lr_rate=1e-3,
    lr_cycles=1,
    val_metric="loss",
    preprocesses=[
        hk.NamedParams(
            name="layer_norm",
            params=dict(
                epsilon=0.01,
                name="znorm",
            ),
        ),
    ],
    augmentations=[],
    architecture=hk.NamedParams(
        name="efficientnetv2",
        params=dict(
            input_filters=8,
            input_kernel_size=[1, 9],
            input_strides=[1, 2],
            blocks=[
                {"filters": 16, "depth": 1, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1, "se_ratio": 2},
                {"filters": 24, "depth": 1, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1, "se_ratio": 2},
                {"filters": 32, "depth": 1, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1, "se_ratio": 2},
                {"filters": 40, "depth": 1, "kernel_size": [1, 9], "strides": [1, 2], "ex_ratio": 1, "se_ratio": 2},
            ],
            output_filters=0,
            include_top=True,
            use_logits=True,
        ),
    ),
)
task = hk.TaskFactory.get("heartrate")
Train the model¶
task.train(params)
Epoch 1/100
50/50 ━━━━━━━━━━━━━━━━━━━━ 17s 55ms/step - loss: 1.4214 - mae: 1.1194 - mse: 1.4214 - rsq: -16.4093 - val_loss: 0.7500 - val_mae: 0.8469 - val_mse: 0.7500 - val_rsq: -8.4336
Epoch 2/100
50/50 ━━━━━━━━━━━━━━━━━━━━ 2s 37ms/step - loss: 0.4457 - mae: 0.6133 - mse: 0.4457 - rsq: -4.2974 - val_loss: 0.0573 - val_mae: 0.1936 - val_mse: 0.0573 - val_rsq: 0.2792
[epochs 3-98 elided: val_loss falls steadily from 0.0573 to ~0.0030]
Epoch 99/100
50/50 ━━━━━━━━━━━━━━━━━━━━ 2s 35ms/step - loss: 0.0112 - mae: 0.0760 - mse: 0.0112 - rsq: 0.8565 - val_loss: 0.0031 - val_mae: 0.0310 - val_mse: 0.0031 - val_rsq: 0.9612
Epoch 100/100
50/50 ━━━━━━━━━━━━━━━━━━━━ 2s 34ms/step - loss: 0.0122 - mae: 0.0779 - mse: 0.0122 - rsq: 0.8481 - val_loss: 0.0030 - val_mae: 0.0305 - val_mse: 0.0030 - val_rsq: 0.9618
39/39 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0030 - mae: 0.0307 - mse: 0.0030 - rsq: 0.9639
INFO [VAL SET] LOSS=0.0030, MAE=0.0305, MSE=0.0030, RSQ=0.9621
Finally, we will evaluate the model¶
task.evaluate(params)
39/39 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0030 - mae: 0.0307 - mse: 0.0030 - rsq: 0.9639
INFO [TEST SET] LOSS=0.30%, MAE=3.05%, MSE=0.30%, RSQ=96.21%
312/312 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step