I am currently working as an actuarial consultant in the exciting areas of Non-Life Insurance and Pensions, and I am a Chartered Actuary (Fellow) of the Institute and Faculty of Actuaries. I hold an MSc in Artificial Intelligence from the University of Edinburgh and a BSc in Actuarial Science from Heriot-Watt University in Edinburgh.
My interests extend beyond the traditional actuarial areas to topics such as Machine Learning, Data Science, AI Ethics and Big Data Analytics.
This project constituted my MSc thesis and involved the implementation and empirical evaluation of non-conventional value function approximation methods in the Reinforcement Learning context. Implemented methods include approaches based on Decision Trees, Support Vector Regression, k-Nearest Neighbours and Gaussian Processes.
The evaluation of each method, and its comparison with conventional approaches, revealed important insights into the respective strengths and weaknesses of each model. Given the significance of these results, I continued working on the project under the guidance of the Autonomous Agents Research Group at the University of Edinburgh. My code repository and experiments are now maintained on their site:
Repository: https://github.com/uoe-agents/non_conventional_value_function_approximation
Thesis: https://agents-lab.org/blog/master-dissertations/atsiakkas_msc2021.pdf
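To give a flavour of the idea, here is a minimal sketch of a k-Nearest Neighbours value function approximator. The class name and interface are illustrative only and are not taken from the thesis repository: V(s) is estimated as the mean observed return of the k stored states closest to s.

```python
import math

class KNNValueFunction:
    """Approximates V(s) as the mean return of the k nearest stored states.

    Illustrative sketch only -- the names and interface are hypothetical,
    not the ones used in the thesis code.
    """

    def __init__(self, k=3):
        self.k = k
        self.samples = []  # list of (state_vector, observed_return)

    def update(self, state, ret):
        # Non-parametric: learning is just storing the sample
        self.samples.append((state, ret))

    def predict(self, state):
        if not self.samples:
            return 0.0
        # Euclidean distance from the query state to every stored state
        dists = sorted((math.dist(state, s), r) for s, r in self.samples)
        nearest = dists[: self.k]
        return sum(r for _, r in nearest) / len(nearest)
```

Unlike a neural-network approximator, this model has no training loop: each observed return is memorised, and generalisation comes entirely from the distance metric.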
This project performs data cleansing and pre-processing of cough audio recordings and implements a convolutional neural network to classify whether the individual in each recording had COVID-19. The CNN is based on the ResNet-50 architecture, and the pre-processing steps include filtering, down-sampling, audio segmentation, standardisation, and conversion to spectrograms. The crowd-sourced Coughvid and Coswara datasets were used for training and evaluating the model.
Repository: https://github.com/atsiakkas/covid19_cough_classification
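The pre-processing steps above can be sketched as follows. This is an illustrative pipeline using SciPy, not the repository's actual code; the sample rate, segment length and FFT size are assumed values chosen for the example.

```python
import numpy as np
from scipy.signal import spectrogram, resample

def preprocess_cough(audio, orig_sr, target_sr=16000, segment_s=1.0):
    """Down-sample, standardise, segment and convert a cough recording
    to log-power spectrograms. Parameter values are illustrative only."""
    # Down-sample to the target rate
    n_target = int(len(audio) * target_sr / orig_sr)
    audio = resample(audio, n_target)
    # Standardise to zero mean, unit variance
    audio = (audio - audio.mean()) / (audio.std() + 1e-8)
    # Split into fixed-length segments, dropping any remainder
    seg_len = int(segment_s * target_sr)
    segments = [audio[i:i + seg_len]
                for i in range(0, len(audio) - seg_len + 1, seg_len)]
    # Convert each segment to a log-power spectrogram (the CNN input)
    specs = []
    for seg in segments:
        _, _, sxx = spectrogram(seg, fs=target_sr, nperseg=256)
        specs.append(np.log(sxx + 1e-10))
    return specs
```

Each returned spectrogram is a 2-D array, which is what lets an image architecture such as ResNet-50 be applied to audio.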
This project creates an audio synthesizer model that receives a phrase from the user and plays back a synthesized audio pronunciation of that phrase. The implemented model splits the given phrase into diphones and pauses and uses a set of pre-recorded audio files of all English diphones to synthesize the phrase.
Repository: https://github.com/atsiakkas/audio_synthesizer
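The diphone-splitting step can be sketched in a few lines. This is a hypothetical helper, not the repository's code: it assumes the phrase has already been mapped to a phone sequence (e.g. via a pronunciation dictionary), and uses "pau" to mark the silence at either edge, a common convention in diphone synthesis.

```python
def to_diphones(phones):
    """Split a phone sequence into diphones: overlapping pairs of
    adjacent phones, padded with a pause ("pau") at each end.

    Illustrative sketch -- the function name and the "pau" marker
    are assumptions, not taken from the repository.
    """
    padded = ["pau"] + list(phones) + ["pau"]
    # Each diphone spans the transition between two adjacent phones
    return [f"{a}-{b}" for a, b in zip(padded, padded[1:])]
```

Each diphone label would then index into the bank of pre-recorded audio files, and the matching clips are concatenated to produce the output waveform.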