Proxies and brokers. Don't replace everything at once. Add new code that preserves the old functionality, then replace the old code piece by piece, like the Ship of Theseus. This reduces the domino effect of bugs.
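A minimal sketch of the proxy idea behind this strangler-style migration: a routing object sends already-migrated operations to the new code and everything else to the legacy code, so pieces can be swapped one at a time. All class and method names here are illustrative, not from the article.

```python
class LegacyBilling:
    """Stands in for the old system being strangled."""
    def invoice(self, order_id):
        return f"legacy-invoice-{order_id}"

    def refund(self, order_id):
        return f"legacy-refund-{order_id}"

class NewBilling:
    """The replacement, built up one operation at a time."""
    def invoice(self, order_id):
        return f"new-invoice-{order_id}"

class BillingProxy:
    """Routes migrated operations to the new code, everything else to the old."""
    def __init__(self):
        self._legacy = LegacyBilling()
        self._new = NewBilling()
        self._migrated = {"invoice"}  # grow this set as pieces are rewritten

    def __getattr__(self, name):
        # Called only for attributes not found normally, i.e. the billing ops.
        target = self._new if name in self._migrated else self._legacy
        return getattr(target, name)

proxy = BillingProxy()
proxy.invoice(7)  # handled by NewBilling
proxy.refund(7)   # still handled by LegacyBilling
```

Callers only ever talk to the proxy, so retiring `LegacyBilling` at the end is invisible to them.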
-
Avoid rewriting a legacy system from scratch, by strangling it
-
A journey to searching Have I Been Pwned database in 49μs (C++)
Searching using an optimal algorithm and data structure. It's telling how, in typical cases, we don't even consider the worst-case scenario and just let the algorithm be. But data structures and search algorithms really do matter in performance-critical domains.
This makes sense to me at the moment, having been working as a computer vision engineer.
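A toy illustration of why the data structure matters here: the HIBP dataset can be kept as sorted hashes, so a lookup can binary-search in O(log n) instead of scanning linearly. This sketch works on a small in-memory sorted list with made-up hash strings; the article's C++ version operates on the actual on-disk database.

```python
import bisect

# Toy sorted "database" standing in for a sorted hash file (fake hashes).
sorted_hashes = [
    "1a79a4d60de6718e8e5b326e338ae533",
    "8277e0910d750195b448797616e091ad",
    "9033e0e305f247c0c3c80d0c7848c8b3",
    "9e107d9d372bb6826bd81d3542a419d6",
]

def pwned(h):
    # Binary search: O(log n) comparisons instead of O(n).
    i = bisect.bisect_left(sorted_hashes, h)
    return i < len(sorted_hashes) and sorted_hashes[i] == h

pwned("9e107d9d372bb6826bd81d3542a419d6")  # found
pwned("ffffffffffffffffffffffffffffffff")  # not found
```

On the real dataset (hundreds of millions of entries), the gap between a scan and a binary search is the difference between seconds and microseconds.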
*coughs* (over-estimation in the post), PDFs are a pain in the ass....
The shell can be one of the most powerful tools at your disposal once you are comfortable with the CLI.
Partly because it's obscure and non-deterministic. "Non-deterministic" in the sense that a simple change can have unforeseen consequences and latent bugs. The goals also keep changing constantly: problems with scaling, keeping up with temporal values, and such.
Hacker News Thread: https://news.ycombinator.com/item?id=22523814
This is an interesting issue in open source. The original creator of Guake was not credited on the community-continued version of the project.
It also showcases a healthy discussion between the people involved.
List of relevant resources for data engineering.
[Almost certainty] If other ML models already solved the problem.
[Very high probability] If a similar problem has already been solved by an ML algorithm, and the differences between that and your problem don’t seem significant.
[High probability] If the inputs & outputs are small enough to be comparable in size to those of other working ML models AND if we know a human can solve the problem with little context besides the inputs and outputs.
[Reasonable probability] If the inputs & outputs are small enough to be comparable in size to those of other working ML models AND we have a high certainty about the deterministic nature of the problem (that is to say, about the inputs being sufficient to infer the outputs).
We seem to cling to patterns we are familiar with, and that applies to software development too. Patterns in programming...
I love how Ben explains complex underlying phenomena in a simple, understandable manner. This specific video demonstrates the pros and cons of building an entire computer on a breadboard: at higher signal frequencies, noise kicks in due to the extra inductance and capacitance on the board, which might go unnoticed with low-frequency signals.
"But, a company can send out its take-home interview question indiscriminately and waste candidates' time. (This is why I walk away from time-consuming take-homes, or take-homes that require an alphabet soup of technologies.)"
"It's much harder to tell when a take-home question is bad. Some candidates might want to impress you and spend an unreasonable time on the question. Other candidates might ghost. (Ghosting means the candidate just stops responding to emails and phone calls.) The feedback loop just isn't there."
"As a candidate, when I get a bad whiteboarding question, it's easy for me to just muddle through it. Depending on circumstances, I'll either decline the job or discuss the odd question with someone else in the process. But, when I get a bad take-home question, I'm stuck between either ghosting, (not responding at all,) or providing feedback. I've done both, and I'm not sure which is a better approach. The one case I gave feedback, the hiring manager got defensive, which confirmed my decision to walk away."
Interesting. Tests have their own set of planes.
I feel this is a great tool. Generating a dependency graph has always been a pain point for me since I don't use any IDE.
pydeps generates the graph at the module level, and it's quite useful to see it.