<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Eng Yasin's Blog :)</title><link>https://engyasin.github.io/</link><description>Posts and articles about AI technologies and computer science</description><atom:link href="https://engyasin.github.io/rss.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2024 <a href="mailto:yy33@tu-clausthal.de">Yasin Yousif</a> </copyright><lastBuildDate>Sun, 19 May 2024 11:48:00 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><link>https://engyasin.github.io/posts/why-the-new-kolmogorov-arnold-networks-so-promising/</link><dc:creator>Yasin Yousif (llama3-comments)</dc:creator><description><div><p><em>Recently, (yet) another new neural network structure was proposed: the Kolmogorov-Arnold Network (KAN). This new structure soon attracted a lot of attention, and for good reason: interpretability. What current Multi-Layer Perceptron (MLP) networks lack is a way to make sense of a network's predictions. No magic is involved; we need to know how the learning is done, so we can improve, fix, or extend it in an efficient manner. KANs take a significant step forward in this regard by relying on sums of univariate functions, which have been proven to represent higher-dimensional functions effectively.</em></p>
<p><a href="https://engyasin.github.io/posts/why-the-new-kolmogorov-arnold-networks-so-promising/">Read more…</a> (6 min remaining to read)</p></div></description><category>additve-models</category><category>ai</category><category>deep-learning</category><category>interpretability</category><guid>https://engyasin.github.io/posts/why-the-new-kolmogorov-arnold-networks-so-promising/</guid><pubDate>Sun, 19 May 2024 07:25:52 GMT</pubDate></item><item><link>https://engyasin.github.io/posts/tracking-my-working-times-for-804-days/</link><dc:creator>Yasin Yousif</dc:creator><description><div><p><em>As a student or knowledge worker, time management is essential for success. However, organizing one's schedule can be challenging; for instance, one faces the problem of distributing work and rest times across optimal time windows. To address this issue, analyzing an individual's previous working schedules may yield useful recommendations.</em></p>
<p><a href="https://engyasin.github.io/posts/tracking-my-working-times-for-804-days/">Read more…</a> (7 min remaining to read)</p></div></description><category>GAM</category><category>productivity</category><category>tips</category><guid>https://engyasin.github.io/posts/tracking-my-working-times-for-804-days/</guid><pubDate>Sun, 14 Apr 2024 11:25:04 GMT</pubDate></item><item><link>https://engyasin.github.io/posts/why-deep-learning-sucks/</link><dc:creator>Yasin Yousif</dc:creator><description><div><p><em>After spending some years studying and using deep learning, I have always suffered from the difficulty of debugging errors or setting hyperparameters. For a researcher, this wastes not only time but also money and resources. In this article, we will demonstrate how traditional rule-based methods have a hidden edge (besides simplicity) in solving complex problems that require automation.</em></p>
<p><a href="https://engyasin.github.io/posts/why-deep-learning-sucks/">Read more…</a> (3 min remaining to read)</p></div></description><category>ai</category><category>deep-learning</category><category>opinion</category><guid>https://engyasin.github.io/posts/why-deep-learning-sucks/</guid><pubDate>Tue, 09 Jan 2024 11:49:14 GMT</pubDate></item><item><title>The unexpected winter of Artificial Intelligence</title><link>https://engyasin.github.io/posts/the-unexpected-winter-of-artifical-intelligence/</link><dc:creator>Yasin Yousif</dc:creator><description><div><p><em>Nowadays, everyone is excited about the latest trends in AI applications, like ChatGPT, self-driving cars, image synthesis, etc. This overhype is not new; it happened before, in the AI winter of the 1980s. Some warn against it, because it may cause disappointment and even a new AI winter. But here I will talk about a bottleneck of AI research that I have come across in my work. It may not be called a winter, but it will definitely cause a slowdown in the field.</em></p>
<p><a href="https://engyasin.github.io/posts/the-unexpected-winter-of-artifical-intelligence/">Read more…</a> (3 min remaining to read)</p></div></description><category>ai</category><category>deep-learning</category><category>opinion</category><guid>https://engyasin.github.io/posts/the-unexpected-winter-of-artifical-intelligence/</guid><pubDate>Sun, 05 Mar 2023 06:13:36 GMT</pubDate></item><item><link>https://engyasin.github.io/posts/train-your-deep-neural-network-faster-with-automatic-mixed-precision/</link><dc:creator>Yasin Yousif</dc:creator><description><div><p><em>Have you been working on a large deep learning model and wondered how to squeeze out every possibility to save time? Or maybe you have the best GPU hardware but still find training too slow. Well, look at the bright side: this means you still have room for improvement :)</em></p>
<p><a href="https://engyasin.github.io/posts/train-your-deep-neural-network-faster-with-automatic-mixed-precision/">Read more…</a> (2 min remaining to read)</p></div></description><category>deep-learning</category><category>pytorch</category><category>tips</category><guid>https://engyasin.github.io/posts/train-your-deep-neural-network-faster-with-automatic-mixed-precision/</guid><pubDate>Fri, 23 Sep 2022 14:53:19 GMT</pubDate></item></channel></rss>