{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"tags": []
},
"source": [
"FederatedScope is a flexible FL framework, which enables users to implement complex FL algorithms simply and intuitively. In this tutorial, we will show how to implement diverse personalized FL algorithms.\n",
"\n",
"## Background\n",
"\n",
"In an FL course, multiple clients aim to cooperatively learn models without directly sharing their private data. As a result, these clients can be arbitrarily different in terms of their underlying **data distribution** and **system resources** such as computational power and communication width.\n",
"\n",
"- On one hand, the data quantity skew, feature distribution skew, label distribution skew, and temporal skew are pervasive in real-world applications as different users generate the data with different usage manners.\n",
" Simply applying the shared global model for all participants might lead to sub-optimal performance.\n",
"- On the other hand, the participation degrees of different FL participants can be diverse due to their different hardware capabilities and network conditions.\n",
"\n",
"It is challenging to make full use of local data considering such systematical heterogeneity. As a natural and effective approach to address these challenges, personalization gains increasing attention in recent years. Personalized FL (pFL) raises strong demand for various customized FL implementation, e.g., the personalization may exist in\n",
"\n",
"- Model objects, optimizers and hyper-parameters\n",
"- Model sub-modules\n",
"- Client-end behaviors such as regularization and multi-model interaction\n",
"- Server-end behaviors such as model interpolation\n",
"\n",
"We have implemented several implementations for state-of-the-art (SOTA) pFL methods to meet the above requirements. We will demonstrate the FedBN in this playground. More examples can be found in our [tutorial](https://federatedscope.io/docs/pfl/), in which we show how powerful and flexible the FederatedScope framework to implement pFL extensions.\n"
]
},
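{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"As a warm-up, here is a minimal, illustrative sketch of the client-end regularization mentioned in the list above: a proximal penalty that pulls a client's personal model toward the received global model, in the spirit of proximal-term-based pFL methods such as Ditto and pFedMe. This is not FederatedScope's actual implementation; the function name and the coefficient `mu` are placeholders chosen for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Illustrative sketch only: a proximal penalty encouraging the client's personal\n",
"# model to stay close to the global parameters received from the server.\n",
"def proximal_penalty(personal_model, global_params, mu=0.1):\n",
"    return 0.5 * mu * sum(\n",
"        torch.sum((p - g.detach()) ** 2)\n",
"        for p, g in zip(personal_model.parameters(), global_params))\n",
"\n",
"# Usage (illustrative): local_loss = task_loss + proximal_penalty(model, global_params)"
]
},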
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"tags": []
},
"source": [
"## The necessity of personalization: A multi-task example\n",
"\n",
"### Setting\n",
"Federated learning can be well modeled by multi-task learning, that is, viewing each client's learning as a subtask. A common multi-task learning scenario is that individual subtasks use different and non-overlapping features and labels. \n",
"Although the features and labels are superficially different, the ability of sub-task models can be migrated and enhanced by the federated process.\n",
"\n",
"Note that when the features and labels have different shape, it is necessary to personalize the input and output layers for the aggregation compatibility.\n",
"Here we use a multi-task graph dataset as an example since the dataset has a relatively small data size and we can check the performance quickly. \n",
"For more larger datasets, please refer examples in `scripts/personalization_exp_scripts`.\n",
"\n",
"### Configuration\n",
"**FederatedScope** organizes the configuration through an extension of `yacs.config.cfgNode`. Please refer to our [official documentation](https://federatedscope.io/docs/own-case/) for specific instructions on how to configure the initial `global_cfg` and customize your own configuration. By default, we provide built-in personalization-related configurations in `federatedscope/core/configs/`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2022-03-31T09:23:17.731683Z",
"start_time": "2022-03-31T09:23:17.710894Z"
},
"pycharm": {
"name": "#%%\n"
},
"tags": []
},
"outputs": [],
"source": [
"from federatedscope.core.configs.config import global_cfg\n",
"import torch\n",
"\n",
"pfl_cfg = global_cfg.clone()\n",
"pfl_cfg.merge_from_file(\"FederatedScope/federatedscope/gfl/baseline/fedavg_gnn_minibatch_on_multi_task.yaml\")\n",
"pfl_cfg.use_gpu = torch.cuda.is_available()\n",
"\n",
"print(pfl_cfg.personalization)"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"As you can see, we make the input and output layer as local personalized parameters by simply configuration."
]
},
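{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Besides merging options from a YAML file as above, individual options can also be overridden programmatically in the usual yacs style. The sketch below works on a clone so that `pfl_cfg` itself stays untouched; the keys used are only examples taken from the built-in configs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Illustrative sketch: override options on a clone in the yacs style.\n",
"# The keys below (federate.total_round_num, seed) are taken from the built-in\n",
"# configs; adapt them to your own setup.\n",
"demo_cfg = pfl_cfg.clone()\n",
"demo_cfg.merge_from_list(['federate.total_round_num', 20, 'seed', 12345])\n",
"print(demo_cfg.federate.total_round_num, demo_cfg.seed)"
]
},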
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"### 1. Prepare dataset and model\n",
"Now, let's load the graph multi-task dataset and check the GIN model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2022-03-31T09:23:35.205319Z",
"start_time": "2022-03-31T09:23:18.883208Z"
},
"jupyter": {
"outputs_hidden": true
},
"pycharm": {
"name": "#%%\n"
},
"tags": []
},
"outputs": [],
"source": [
"from federatedscope.core.auxiliaries.data_builder import get_data\n",
"\n",
"data, modified_cfg = get_data(pfl_cfg.clone())\n",
"pfl_cfg.merge_from_other_cfg(modified_cfg)\n",
"print(f\"=====The data config is =====\\n{pfl_cfg.data}\\n\")\n",
"print(f\"=====The data meta-info is =====\\n{data}\\n\")\n",
"print(f\"=====The train data at client 1 is =====\\n{data[1]['train'].dataset}\\n\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2022-03-31T08:56:22.102985Z",
"start_time": "2022-03-31T08:56:22.100304Z"
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"print(f\"=====The model config is =====\\n{pfl_cfg.model}\\n\")"
]
},
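{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"To relate the personalization keywords (those listed in `pfl_cfg.personalization.local_param`) to the actual model, one option is to build the model locally and list its parameter names. The sketch below assumes that `get_model` from `federatedscope.core.auxiliaries.model_builder` accepts the model config together with one client's data (used to infer the input/output dimensions); if the signature differs in your version, adapt it accordingly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Hedged sketch: instantiate the GIN model locally and inspect its top-level\n",
"# parameter names; the personalization keywords are matched against such names.\n",
"from federatedscope.core.auxiliaries.model_builder import get_model\n",
"\n",
"model = get_model(pfl_cfg.model, data[1])\n",
"print(sorted({name.split('.')[0] for name, _ in model.named_parameters()}))"
]
},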
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
},
"tags": []
},
"source": [
"## 2. Check the performance w/o FedBN\n",
"Now, let's check the FL performance from FedAvg. We first check several task-specific configuration speficied by our yaml:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2022-03-31T08:56:22.108379Z",
"start_time": "2022-03-31T08:56:22.104300Z"
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"print(f\"=====Evaluation related configs=====\\n{pfl_cfg.eval}\\n\")\n",
"print(f\"=====Federated setting related configs=====\\n{pfl_cfg.federate}\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"jp-MarkdownHeadingCollapsed": true,
"pycharm": {
"name": "#%% md\n"
},
"tags": []
},
"source": [
"Then we can see that we only maintain local paramters of pre-layer and last-layer to handle multi-task case:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"tags": []
},
"outputs": [],
"source": [
"print(pfl_cfg.personalization) "
]
},
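{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Assuming the `local_param` field shown above is the keyword list that decides which parameters stay local on each client, it can be inspected and edited like any other option. The sketch below works on a clone of `pfl_cfg` and appends a hypothetical keyword `my_private_head` purely for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Illustrative sketch: inspect and extend the keywords of locally kept parameters.\n",
"# 'my_private_head' is a hypothetical layer-name substring used only as an example.\n",
"demo_cfg = pfl_cfg.clone()\n",
"print('keywords of locally kept parameters:', demo_cfg.personalization.local_param)\n",
"demo_cfg.personalization.local_param = demo_cfg.personalization.local_param + ['my_private_head']\n",
"print(demo_cfg.personalization.local_param)"
]
},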
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Now let's run our API to see the performance of FedAvg! This will take a few minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"tags": []
},
"outputs": [],
"source": [
"from federatedscope.core.fed_runner import FedRunner\n",
"from federatedscope.core.auxiliaries.worker_builder import get_server_cls, get_client_cls\n",
"from federatedscope.core.auxiliaries.utils import setup_seed\n",
"from federatedscope.core.auxiliaries.logging import update_logger\n",
"update_logger(pfl_cfg)\n",
"setup_seed(pfl_cfg.seed)\n",
"\n",
"Fed_runner = FedRunner(data=data,\n",
" server_class=get_server_cls(pfl_cfg),\n",
" client_class=get_client_cls(pfl_cfg),\n",
" config=pfl_cfg.clone())\n",
"Fed_runner.run()"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"We finally log the best results based on the validation datasets. \n",
"You can see that the FedAvg gains 0.615 acc when using weighted summarizaion manner. Besides, we find that the std of test accucary is 0.140, and the top and bottom acc deciles are 0.86 and 0.465 respectively."
]
},
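{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"For reference, the \"weighted\" summary denotes a data-size-weighted average of the per-client results. The sketch below illustrates this computation with placeholder numbers; they are not the actual per-client results of the run above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Illustrative only: a data-size-weighted average over clients.\n",
"# The per-client accuracies and test-set sizes below are placeholders,\n",
"# not the actual results of the run above.\n",
"accs = [0.70, 0.55, 0.62]   # per-client test accuracy (illustrative)\n",
"sizes = [120, 80, 200]      # per-client test-set sizes (illustrative)\n",
"weighted_acc = sum(a * s for a, s in zip(accs, sizes)) / sum(sizes)\n",
"print(f'weighted test acc: {weighted_acc:.3f}')"
]
},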
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## 2. Check the performance w/ FedBN\n",
"Now, lets use the FedBN and see the performance change!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2022-03-31T08:56:22.113566Z",
"start_time": "2022-03-31T08:56:22.109771Z"
},
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"print(f\"=====[Before] Federated personalization related configs=====\\n{pfl_cfg.personalization}\\n\")\n",
"\n",
"pfl_cfg.merge_from_file(\"FederatedScope/scripts/personalization_exp_scripts/fedbn/fedbn_gnn_minibatch_on_multi_task.yaml\")\n",
"pfl_cfg.use_gpu = torch.cuda.is_available()\n",
"\n",
"print(f\"=====[After] Federated personalization related configs=====\\n{pfl_cfg.personalization}\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"The modification here is to filter the `norms` parameters. Let's run the API to check the performance of FedBN!"
]
},
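{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"The sketch below illustrates FedBN-style filtering: parameters whose names contain a designated keyword (here `norms`, matching the batch-normalization layers) are kept on the client, while only the remaining parameters are shared with the server. The function is illustrative and not FederatedScope's internal implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Illustrative sketch of FedBN-style filtering (not the library's internal code):\n",
"# parameters matching a local keyword stay on the client, the rest are shared.\n",
"def split_shared_and_local(named_params, local_keywords=('norms',)):\n",
"    shared, local = {}, {}\n",
"    for name, param in named_params:\n",
"        target = local if any(k in name for k in local_keywords) else shared\n",
"        target[name] = param\n",
"    return shared, local\n",
"\n",
"# Usage (illustrative): shared, local = split_shared_and_local(model.named_parameters())"
]
},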
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"ExecuteTime": {
"end_time": "2022-03-31T09:03:14.923069Z",
"start_time": "2022-03-31T09:03:13.933170Z"
},
"pycharm": {
"name": "#%%\n"
},
"tags": []
},
"outputs": [],
"source": [
"from federatedscope.core.fed_runner import FedRunner\n",
"from federatedscope.core.auxiliaries.worker_builder import get_server_cls, get_client_cls\n",
"\n",
"\n",
"Fed_runner = FedRunner(data=data,\n",
" server_class=get_server_cls(pfl_cfg),\n",
" client_class=get_client_cls(pfl_cfg),\n",
" config=pfl_cfg.clone())\n",
"Fed_runner.run()"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"We can find that simple personalized method achieves 0.741 acc when using weighted summarizaion manner. Besides, the std of test accucary is 0.138, and the top and bottom acc deciles are 0.927 and 0.514 respectively. \n",
"\n",
"Compared with FedAvg, we gain significant accuracy improvements (12.6%)! The fairness-related metrics are also improved: the standard deviation, the top and bottom deciles are improved with a ratio 0.2%, 6.7%, and 4.9% respectively.\n",
"\n",
"Try other pFL algorithms from our [tutorial](https://federatedscope.io/docs/pfl/), and welcome to [contribute](https://federatedscope.io/docs/contributor/) more pFL algorithms!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}