Welcome to Can's GitHub Home

👋 Hi there! I'm Can OZKAN, a PhD student at University College London (UCL) and a guest lecturer at the London School of Economics (LSE). My research centers on dynamic network representations for road and telecommunication networks, and I'm particularly interested in how disruptions such as road-segment closures or failed telecom links affect the network as a whole.

Research Focus 🧠

My primary focus lies at the intersection of graph neural networks and reinforcement learning, using these tools to model and analyze dynamic networks. I'm passionate about uncovering hidden patterns and dependencies in complex systems and about finding practical solutions to real-world network-modeling challenges.

Academic Journey 📚

  • Ph.D. at UCL: Exploring dynamic network representations and their implications for network resilience.
  • Guest Lecturer at LSE: Having fun teaching data science and machine learning.

Technical Toolbox 🛠️

  • Graph Neural Networks (GNNs): Harnessing the power of GNNs to capture complex relationships in dynamic networks.
  • Reinforcement Learning: Applying RL techniques to optimize network behavior in response to disruptions.
  • NumPy, PyTorch, PyG: My go-to tools for efficient numerical computation, deep learning, and graph-related tasks (a toy PyG sketch follows this list).
  • Stable Baselines3: Reliable implementations of reinforcement learning algorithms for my RL experiments.
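
To give a flavor of how these pieces fit together, here is a minimal, purely illustrative sketch (not code from my research): it builds a toy four-junction road network in PyTorch Geometric, embeds the junctions with a small two-layer GCN, and re-runs the model after dropping one road segment to mimic a closure. The node features, architecture, and numbers are invented for the example.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy road network: 4 junctions (nodes); edges are road segments, listed in
# both directions because the graph is undirected. Node features are made-up
# [current speed, lane count] values, purely for illustration.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 0, 2],
                           [1, 0, 2, 1, 3, 2, 2, 0]], dtype=torch.long)
x = torch.tensor([[55.0, 2], [40.0, 3], [60.0, 2], [30.0, 1]], dtype=torch.float)
data = Data(x=x, edge_index=edge_index)

class JunctionEncoder(torch.nn.Module):
    """Two-layer GCN that embeds each junction using its neighbourhood."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)

model = JunctionEncoder(in_dim=2, hidden_dim=16, out_dim=8)
embeddings = model(data.x, data.edge_index)   # shape: [4 nodes, 8 dims]

# Simulate a disruption: close the segment between junctions 2 and 3 by
# masking out its two directed edges, then re-embed the network.
keep = ~(((edge_index[0] == 2) & (edge_index[1] == 3)) |
         ((edge_index[0] == 3) & (edge_index[1] == 2)))
embeddings_after = model(data.x, edge_index[:, keep])
```

Comparing `embeddings` with `embeddings_after` gives a rough picture of which junctions are most affected by the closure; in practice an RL agent (e.g. one built on Stable Baselines3) could act on such representations.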

Previous Endeavors 🎓

  • Master's at Imperial College London: Explored short-term traffic prediction using Kalman filtering, PCA, and ICA (a toy Kalman-filter sketch follows below).
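
For flavor only, here is a tiny, self-contained example on synthetic data (not the actual Master's pipeline, which also used PCA/ICA on real detector measurements): a 1-D Kalman filter with a random-walk model smoothing noisy traffic-speed readings, where the one-step prediction doubles as a short-term forecast. The noise variances are assumed values chosen for the illustration.

```python
import numpy as np

# Synthetic data: a slowly drifting "true" traffic speed plus noisy sensor reads.
rng = np.random.default_rng(0)
true_speed = 50 + np.cumsum(rng.normal(0, 0.5, size=60))   # latent speed (km/h)
observed = true_speed + rng.normal(0, 4.0, size=60)        # noisy measurements

q, r = 0.25, 16.0            # process / measurement noise variances (assumed)
x_est, p_est = observed[0], 1.0
estimates = []
for z in observed:
    # Predict: random-walk state transition, so the mean is unchanged and
    # the uncertainty grows by the process noise q.
    x_pred, p_pred = x_est, p_est + q
    # Update: blend the prediction with the new measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_est = x_pred + k * (z - x_pred)
    p_est = (1 - k) * p_pred
    estimates.append(x_est)

estimates = np.asarray(estimates)   # filtered speeds; x_pred is the short-term forecast
```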

Connect with Me 🌐

Feel free to reach out if you're interested in collaborative research, have questions about my work, or just want to chat about the exciting world of dynamic networks!

Popular repositories

  1. hello-world: This is the first repo.

  2. academic-kickstart: Forked from HugoBlox/theme-academic-cv (Shell). 📝 Easily create a beautiful website using Academic, Hugo, and Netlify.

  3. reinforcement-learning: Forked from dennybritz/reinforcement-learning (Jupyter Notebook). Implementation of Reinforcement Learning Algorithms. Python, OpenAI Gym, Tensorflow. Exercises and Solutions to accompany Sutton's Book and David Silver's course.

  4. stable-baselines3: Forked from DLR-RM/stable-baselines3 (Python). PyTorch version of Stable Baselines, improved implementations of reinforcement learning algorithms.

  5. rl-tutorial-jnrr19: Forked from araffin/rl-tutorial-jnrr19 (Jupyter Notebook). Stable-Baselines tutorial for Journées Nationales de la Recherche en Robotique 2019.

  6. gym: Forked from openai/gym (Python). A toolkit for developing and comparing reinforcement learning algorithms.