208 changes: 197 additions & 11 deletions docs/Design/SoftDetailedDes/MIS.tex
@@ -1318,6 +1318,11 @@ \subsubsection{Local Functions}

\newpage

\bibliographystyle{plainnat}
\bibliography{../../../refs/References}

\newpage

\section{MIS of Plugin Initiator}

\subsection{Module}
@@ -1711,21 +1716,202 @@ \section{Appendix --- Reflection}

\input{../../Reflection.tex}

\subsubsection*{Group Reflection}

\begin{enumerate}
\item \textit{What went well while writing this deliverable?}
\item \textit{What pain points did you experience during this deliverable, and how
did you resolve them?}
\item \textit{Which of your design decisions stemmed from speaking to your client(s)
or a proxy (e.g. your peers, stakeholders, potential users)? For those that
were not, why, and where did they come from?}

The decision to modularize the refactorers into specific ``smell-focused''
components was largely inspired by a conversation with our supervisor,
who is also our primary stakeholder. During one of our discussions, our
supervisor suggested that the problem at hand had the potential to
evolve into a graduate-level reinforcement learning project. This
idea of managing multiple refactoring strategies and selecting the
best one based on certain conditions led to the insight that
organizing the refactorers by the specific types of code smells
they address would make the system more extensible. By focusing
each component on a particular code smell, we could later build
upon the design and possibly incorporate machine learning or
reinforcement learning strategies to optimize refactorer selection.
This modular approach would allow for easier integration of additional
strategies in the future, making the tool scalable as the project evolves.
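
As a minimal sketch of what this modular structure could look like (the class
and method names here are illustrative, not our actual interface), each
smell-focused refactorer would inherit from a common base class:

\begin{verbatim}
from abc import ABC, abstractmethod

class BaseRefactorer(ABC):
    """Common interface shared by all smell-focused refactorers."""

    @abstractmethod
    def refactor(self, source: str) -> str:
        """Return a refactored version of the given source code."""

# Illustrative subclass; each concrete refactorer targets one smell.
class LongParameterListRefactorer(BaseRefactorer):
    def refactor(self, source: str) -> str:
        ...  # apply the long-parameter-list transformation
\end{verbatim}

Adding a new strategy then amounts to adding one subclass, which is what
would make it straightforward to layer a learned selection policy on top
later.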


Another important design decision influenced by our supervisor was the
idea to validate the refactored code using a test suite. Our supervisor
emphasized that in a real-world application, validating the integrity
of the refactored code with a comprehensive test suite was a crucial step.
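
A minimal sketch of that validation step, assuming \texttt{pytest} as the
test runner (the helper name is hypothetical):

\begin{verbatim}
import subprocess

def tests_pass(project_dir: str) -> bool:
    # Run the project's test suite; exit code 0 means every test passed.
    result = subprocess.run(["pytest", project_dir], capture_output=True)
    return result.returncode == 0
\end{verbatim}

A refactoring would only be kept if the suite still passes; otherwise the
original code is restored.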

Both of these design decisions were informed by valuable input from our
supervisor, ensuring that the project stayed grounded in real-world
applicability and allowed for future enhancements and improvements.


\item \textit{While creating the design doc, what parts of your other documents (e.g.
requirements, hazard analysis, etc.), if any, needed to be changed, and why?}

While creating the design document, several components of the project were revised to improve clarity and focus. Specifically, the list of code smells targeted by the refactoring library was refined by adding new smells that align more closely with our sustainability goals and removing others deemed less impactful. This required updates to the requirements document to ensure it accurately reflected the new scope of supported refactorings. Additionally, the decision was made to remove the metric reporting functionality due to its complexity and the limited time available, which led to corresponding modifications in both the requirements document and the VnV plan, where this feature had previously been considered for validation. Moreover, the reinforcement learning model, initially intended to optimize refactoring decisions, was excluded from the project due to time constraints and implementation challenges. This necessitated updates to the hazard analysis document to remove risks associated with this component and to better align the analysis with the reduced project scope. These changes ensure consistency and maintain a realistic and achievable project timeline.

\item \textit{What are the limitations of your solution? Put another way, given unlimited resources, what could you do to make the project better? (LO\_ProbSolutions)}

The energy measurement library we selected, CodeCarbon, proved to be less reliable
than anticipated, which affects the accuracy of some of our results. Ideally,
we would replace it with a more dependable resource. However, due to time
constraints and the inherent complexity of measuring CO$_2$ emissions from code,
this isn't feasible within the scope of this project. For now, we are assuming
CodeCarbon's reliability. In a real-world implementation, we would prioritize
using a more robust energy measurement system.
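
For context, our measurements bracket the code under test with CodeCarbon's
\texttt{EmissionsTracker}; the snippet below is a simplified sketch of that
usage (\texttt{run\_workload} is a placeholder for the measured code):

\begin{verbatim}
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
run_workload()              # placeholder for the code being measured
emissions = tracker.stop()  # estimated emissions in kg of CO2-equivalent
\end{verbatim}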

\item \textit{Give a brief overview of other design solutions you considered. What
are the benefits and tradeoffs of those other designs compared with the chosen
design? From all the potential options, why did you select the documented design?
(LO\_Explores)}

We considered incorporating a machine learning aspect into the project,
specifically using reinforcement learning (RL) to manage the refactoring
process. The idea was to treat the selection and application of
refactoring strategies as a decision-making process, where an agent
could learn the best strategies over time based on rewards and outcomes.

In this approach, the agent would represent the system that applies
different refactoring techniques to the code. The environment would
be the code itself, with various code smells and inefficiencies that
the agent needs to address. The actions the agent would take would
involve selecting and applying one of the predefined refactoring
strategies (such as those targeting long lambda functions or long parameter
lists). The reward would be the resulting decrease in energy consumption
(i.e., a reduction in CO$_2$ emissions), measured after the code is refactored and executed.
The agent would receive a positive reward for actions that successfully
lead to more energy-efficient code and a negative reward for actions
that increase energy consumption. Over time, the agent would learn to
prioritize and apply the most effective refactoring techniques based
on the rewards it receives.
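
A toy sketch of that selection loop, framed as an epsilon-greedy bandit over
the available refactorers (all names are illustrative; none of this was
implemented):

\begin{verbatim}
import random

def choose_refactorer(q_values: dict, epsilon: float = 0.1) -> str:
    # Explore a random strategy occasionally; otherwise exploit the
    # strategy with the best average energy reduction seen so far.
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

def update(q_values: dict, counts: dict, name: str, reward: float) -> None:
    # Incremental mean of rewards (e.g., kg of CO2 saved per application).
    counts[name] += 1
    q_values[name] += (reward - q_values[name]) / counts[name]
\end{verbatim}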


While this machine learning solution seemed promising, there were a
few trade-offs to consider. First, implementing reinforcement
learning would significantly increase the complexity of the project.
It would require training data, fine-tuning the agent's learning parameters,
and ensuring that the agent's actions actually lead to measurable
improvements in CO$_2$ efficiency. Additionally, RL would require
ongoing iteration to improve its performance, which could be time-consuming
and resource-intensive, especially given the limited time available
for the project.


Another concern was that reinforcement learning, while powerful,
might not always be the most effective or efficient solution for
this kind of task. The selection of refactoring strategies is not
necessarily a highly complex decision-making process that requires
learning over time. Since we already have a set of predefined
strategies, a more direct, rule-based approach was more appropriate.
We could achieve the same results without the need for training the
agent or dealing with the unpredictability of machine learning models.


Given these trade-offs, we opted to stick with the more straightforward
approach of selecting and applying refactoring strategies based on
predefined rules. This decision was driven by the need for a practical
and efficient solution within the given project constraints. While
reinforcement learning could be an interesting exploration for future
versions of the tool, the current design provides a reliable and
manageable way to achieve the desired results without adding
unnecessary complexity.
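
In the chosen design, strategy selection reduces to a direct lookup from a
detected smell to its refactorer, along these lines (reusing the hypothetical
class names sketched earlier; our actual code differs in detail):

\begin{verbatim}
REFACTORER_FOR_SMELL = {
    "long-parameter-list": LongParameterListRefactorer,
    "long-lambda-function": LongLambdaFunctionRefactorer,
}

def refactor(smell: str, source: str) -> str:
    # Deterministic rule: each smell type maps to exactly one refactorer.
    return REFACTORER_FOR_SMELL[smell]().refactor(source)
\end{verbatim}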


\end{enumerate}

\subsubsection*{Mya Hussain}

\begin{enumerate}
\item \textit{What went well while writing this deliverable? }

Writing the deliverable helped to clearly decompose the system into manageable modules.
This ensured no functionality was missed in the implementation process and that all
components connected in a way that made sense.

\item \textit{What pain points did you experience during this deliverable, and how did you resolve them?}

It was strange that we had already coded the project before completing this deliverable.
It acted as more of a sanity check that our design decisions made sense rather than
an actual blueprint of what to do. This made this deliverable easier to write as
the code was already present, but it also made the work feel unnecessarily redundant.
It often felt like I was documenting things that were already clear or implemented.
This repetition made the process less engaging and, at times, a bit tedious.
To resolve this, I focused on framing the document as an opportunity to validate
and formalize our design decisions, which helped shift the mindset from simply
checking off tasks to reaffirming the thought process behind our choices.

\end{enumerate}

\subsubsection*{Sevhena Walker}

\begin{enumerate}
\item \textit{What went well while writing this deliverable? }

Our team already had a pretty solid idea of how we wanted to break up our system, as well as the key components that should be involved, even before we started working on the MG and MIS documents. We had already coded a decent portion of the system and, in doing so, had explored and tested various design approaches and options. This hands-on experience gave us a strong foundation and a practical understanding of what worked and what didn’t, which significantly influenced our final design choices. For example, we had already determined that the refactorers would be structured as individual classes inheriting from a common base class, which simplified documenting shared functionality in the MIS.

\item \textit{What pain points did you experience during this deliverable, and how did you resolve them?}

One of the biggest pain points was turning our informal design ideas and code into well-defined, modular components with clear inputs, outputs, and semantics. We had to carefully review the existing code to make sure the documentation matched its behaviour while keeping things flexible for future changes. We also ran into some inconsistencies that required minor refactoring to clean up our interfaces. Another tricky part was finding the right balance between providing enough detail and keeping the documentation readable without going too deep into implementation. We tackled these problems by reviewing everything multiple times, getting feedback, and simplifying where we could to make things clearer.

\end{enumerate}

\subsubsection*{Nivetha Kuruparan}

\begin{enumerate}
\item \textit{What went well while writing this deliverable? }
Planning out the different modules early on was incredibly helpful for me. It allowed me to clearly identify how various parts of the system interact and what functionality could be combined or separated. This structured approach not only helped in designing the system but also made it easier to focus on what each module should accomplish, ensuring no major functionality was overlooked.

\item \textit{What pain points did you experience during this deliverable, and how did you resolve them?}
It was challenging for me to think through each module thoroughly and ensure that every input, output, and state variable was captured accurately. This required going through the implementation multiple times and considering edge cases that might not have been obvious at first. Breaking the process into smaller, more manageable tasks and carefully reviewing each module helped resolve this challenge.


\end{enumerate}

\subsubsection*{Ayushi Amin}

\begin{enumerate}
\item \textit{What went well while writing this deliverable? }
Honestly, once I got into it, things flowed pretty smoothly. Breaking everything down into
smaller sections helped a ton. It made the whole thing feel less intimidating. I also felt like
I had a good understanding of how the modules all connected, which made it easier to explain things.
We all had our own parts to work on based on the modules we had or were going to create, so it was easier to
work on something I was familiar with. Also, talking through some of the trickier
parts with my teammates really helped me feel more confident about what I was writing. We all did code reviews and helped each other out on
parts we didn't quite get or only thought we understood. Overall, it felt pretty satisfying to see it all come together.

\item \textit{What pain points did you experience during this deliverable, and how did you resolve them?}
I think the hardest part of this was visualizing extra dependencies and functions I would need to create to make my
module work. We had coded out a portion of it but it did not include everything. I had to make sure I was not missing
anything important. It felt like I was stuck in this loop of overthinking every little detail. To get past it, I took a
break and came back with a fresh perspective, which helped a bit. I also hit up one of my teammates to talk through the
parts I was struggling with. They gave me some ideas and helped me confirm I was on the right track; some of the
modules I worked on were similar to theirs, so we were able to collaborate easily. After that, things did not feel as stressful,
and I was able to wrap it up.

\end{enumerate}

\subsubsection*{Tanveer Brar}

\begin{enumerate}
\item \textit{What went well while writing this deliverable? }
The best part about writing this deliverable was getting the chance to design the user interface before having implemented it. The Source Code Optimizer had already been designed
and implemented as a result of the POC assignment in November. We had not implemented the VS Code Plugin for it yet, so getting the chance to actually think about its
design was very rewarding (especially since most academic projects I have done before either involved no design component or only a minimal one for a small program). Each module has clear
responsibilities, which helped me anticipate all of the plugin's requirements through a logical framework (the POC implementation was a lot of trial and error).
The other good thing was the built-in labels for anticipated changes and modules, which helped me easily write down the traceability matrix.

\item \textit{What pain points did you experience during this deliverable, and how did you resolve them?}
One of the biggest challenges that I faced was identifying the correct module for each anticipated change in the traceability matrix. My teammate had worked on the anticipated
changes, and some of these changes had overlapping responsibilities across modules, so I carefully reviewed the module responsibilities again to be able to point out the modules for
each change. It took a lot of cross-referencing between the module guide and the anticipated changes to make sure nothing was missed.
Also, when determining module dependencies in the ``Uses'' section for each module's decomposition, I was not fully sure about which modules should depend on which for the VS Code Plugin.
This is because there are multiple possible ways, for example the Plugin Initializer or Smell Detector being able to directly call the Source Code Optimizer. While resolving this, I realized that
while there is no one perfect mapping of dependencies, the goal should be to be as modular as possible and apply the separation of concerns principle. This is why, for example, the Backend Communicator
is the only module in the design that communicates with the Source Code Optimizer.
\end{enumerate}

