- Please return your answers to Greenhouse.
- Answer with bullet points:
  - Single sentence per bullet point
  - Max 4 per question
- Return in a file named `yourname_quiz`
User object API doc
```
GET /users
POST /users/new
POST /users/:id/update
POST /users/:id/rename
POST /users/:id/update-timezone
DELETE /users/delete?id=:id
```

Here are some examples of the behavior:

```http
POST /users/new HTTP/1.1

{
  "name": "Cthulhu"
}

HTTP/1.1 200

"Error: Username already exists"
```
- Redundant `/new` suffix in the `POST /users/new` handler, as the POST method already implies creating a resource.
- Redundant `/update`, `/rename` & `/update-timezone` suffixes in `POST /users/:id/update`, `POST /users/:id/rename` & `POST /users/:id/update-timezone`; it's better to combine all those routes into a single one.
- Instead of using the POST method for `/update` and `/rename`, it's better to use PUT (per the HTTP spec, POST is about creating a resource, whilst PUT is used to update a resource already available on the server).
- Duplicated action in the original URI `DELETE /users/delete?id=:id`; since the intent is already specified by the method, we may use the more concise `DELETE /users/:id`.
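The points above can be summarized as a hypothetical, framework-agnostic route table (method/path pairs only; handler wiring is out of scope and any real API might differ):

```javascript
// Hypothetical cleaned-up route table: the action is carried by the
// HTTP method, so no /new, /update, /rename or /update-timezone suffixes.
const routes = [
  { method: "GET",    path: "/users" },     // list users
  { method: "POST",   path: "/users" },     // create a user
  { method: "PUT",    path: "/users/:id" }, // one update route covers rename
                                            // and timezone changes via the body
  { method: "DELETE", path: "/users/:id" }, // concise delete, no query string
];
```

With this shape, renaming a user and changing their timezone are just different request bodies sent to the same `PUT /users/:id` route.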
Both functions return the same result for a two-dimensional integer array (1024x1024). Which one would you prefer? List pros and/or cons of both functions.
```c
#define SIZE 1024

int sumA(int a[SIZE][SIZE])
{
    int sum = 0;
    for (int y = 0; y < SIZE; y++)
        for (int x = 0; x < SIZE; x++)
            sum += a[x][y];
    return sum;
}

int sumB(int *a)
{
    int sum[4] = {0, 0, 0, 0};
    for (int i = 0; i < SIZE*SIZE; i += 4)
    {
        sum[0] += a[i+0];
        sum[1] += a[i+1];
        sum[2] += a[i+2];
        sum[3] += a[i+3];
    }
    return sum[0] + sum[1] + sum[2] + sum[3];
}
```
- sumA:
  - Pros:
    - is more type-safe (the parameter type carries the array dimensions)
    - is more generic, as it only depends on the `SIZE` constant defined at the top level of the module, so that constant can be changed without any need to modify the `sumA` function.
  - Cons:
    - has worse performance than `sumB`: the loops are ordered so that subsequent accesses jump through memory by `SIZE*sizeof(int)` bytes each, which is bad for the cache.
    - doesn't support any array shape other than square (`SIZE` x `SIZE`).
- sumB:
  - Pros:
    - has better performance (~10x on coliru (gcc) with -O3), due to sequential memory access and manual loop unrolling.
  - Cons:
    - the hardcoded stride of 4 at each loop iteration seems fragile; the function does not check that the array size is a multiple of 4.

I'd prefer `sumA` for most cases because it has better readability and is more strict (it knows the exact dimensions, and supports sizes other than 1024 if we change the `SIZE` constant). In case we need the most performant solution, I'd improve `sumB` and stick to it (e.g., add asserts that `SIZE % 4 == 0`, or rewrite the `for` loops to scan memory sequentially from `a[0][0]` to `a[SIZE-1][SIZE-1]`).
more info: https://stackoverflow.com/questions/45259490/pereferring-function-with-array-or-pointer
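The traversal-order point can also be illustrated outside C; a small JavaScript sketch (the fill pattern and names are illustrative) compares the two scan orders over the same flat array. Both return the same sum, but the row-major scan touches memory sequentially, which is the access pattern `sumB` benefits from:

```javascript
const SIZE = 1024;
// Flat SIZE*SIZE array filled with an arbitrary repeating pattern.
const a = new Int32Array(SIZE * SIZE).map((_, i) => i % 7);

// Like sumA: consecutive accesses are SIZE elements apart (cache-hostile).
function sumColumnMajor(a) {
  let sum = 0;
  for (let y = 0; y < SIZE; y++)
    for (let x = 0; x < SIZE; x++)
      sum += a[x * SIZE + y];
  return sum;
}

// Like sumB: consecutive accesses are adjacent in memory (cache-friendly).
function sumRowMajor(a) {
  let sum = 0;
  for (let i = 0; i < SIZE * SIZE; i++)
    sum += a[i];
  return sum;
}
```

Since both visit every element exactly once, the results are identical; only the memory-access pattern (and therefore the speed on large arrays) differs.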
A web service stores user information in a database and uses passwords for authentication. Here's how the password storage and authentication are implemented in Ruby (the actual data storage and retrieval are outside the scope of the example):
```ruby
require 'digest'

class User
  # Use salted passwords
  PASSWORD_SALT = "trustno1"

  # Stored password hash will be accessible through user.hashed_password
  attr_accessor :hashed_password

  # Authenticates user against given password and returns true
  # if the password matches the stored one
  def verify_password(password)
    if hashed_password.nil? || password.nil?
      false
    else
      User.hash_password(password) == hashed_password
    end
  end

  # Changes user's password
  def change_password(new_password)
    self.hashed_password = User.hash_password(new_password.to_s)
  end

  # Hashes the input with salt
  def self.hash_password(password)
    Digest::MD5.hexdigest(password + PASSWORD_SALT)
  end
end
```
- MD5 is not secure for password hashing (it's broken and too fast).
- A static value is used for the salt; we need to generate a new random salt per password.
- In general, implementing your own password hashing and storage is a bad idea. It's best to use an existing (verified and secure) solution like Rails' built-in `has_secure_password` option.
more info: https://stackoverflow.com/questions/45265003/password-security-problems
```javascript
function getDepositHistorySum(user) {
  const deposits = user.transactions.history.deposits;
  let sum = 0;
  for (var i = 0; i < deposits.length; i += 1) {
    sum += deposits[i].amount;
  }
  return sum;
}
```
- Overall, the function is easy to read and understand: it returns the sum of all deposits for the given user.
- However, the function relies on correct input. It needs a set of checks before processing: that the required fields are present and have the correct types, so that invalid input does not lead to a crash or incorrect output.
- The scope of `var i = 0` is the whole function because the `var` keyword is used; this may lead to errors or unwanted side effects if a large function reuses the same variable. To restrict the scope to the `for` loop, use the `let` keyword instead, e.g. `let i = 0`.
- `Array.prototype.reduce` can be used to iterate through the array and sum the elements:

```javascript
console.log([1, 2, 3, 4].reduce((a, b) => a + b, 0)); // output: 10
```
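Putting the points above together, a sketch of a hardened version (the exact validation policy — throwing `TypeError` here — is a design choice, not something the quiz mandates):

```javascript
function getDepositHistorySum(user) {
  // Validate the input shape before touching it.
  const deposits = user?.transactions?.history?.deposits;
  if (!Array.isArray(deposits)) {
    throw new TypeError("user.transactions.history.deposits must be an array");
  }
  // reduce() replaces the manual loop, so no loop index leaks into scope.
  return deposits.reduce((sum, d) => {
    if (typeof d?.amount !== "number") {
      throw new TypeError("each deposit must have a numeric amount");
    }
    return sum + d.amount;
  }, 0);
}
```

Failing fast with a descriptive error is usually preferable to silently returning 0 or crashing deep inside the loop on malformed input.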
A developer is writing code to transfer credits from one user to another and has picked MongoDB as the database. What database-related issues do you find in the `transferCredits` function? List two of the most critical issues.
```javascript
function transferCredits(from, to, amt) {
  var fromAccount = db.game_accounts.findOne({"name": from}, {"credits": 1});
  var toAccount = db.game_accounts.findOne({"name": to}, {"credits": 1});
  if (fromAccount.credits < amt) {
    throw new BalanceError("not enough balance to transfer credits");
  }
  db.game_accounts.update({name: from}, {$set: {credits: fromAccount.credits - amt}});
  db.game_accounts.update({name: to}, {$set: {credits: toAccount.credits + amt}});
}

db.game_accounts.insert({name: "John", credits: 1000});
db.game_accounts.insert({name: "Jane", credits: 1000});

// John transfers credits to Jane
transferCredits("John", "Jane", 100);
```
- `db.game_accounts.findOne()` may return `null`/`undefined` if no matching document is found, so we need to check both `fromAccount` and `toAccount` before further processing.
- The update operations are not isolated into an atomic operation (transaction): if someone writes to either `fromAccount.credits` or `toAccount.credits` between our operations, or one of the updates fails, we end up with an inconsistent state of the data. As per the MongoDB docs, we may use `session.withTransaction` to start a transaction, execute the callback, and commit (or abort on error).
more info:
- https://stackoverflow.com/questions/45279831/finding-database-related-issue
- https://docs.mongodb.com/manual/core/transactions/
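A hedged sketch of how both issues might be addressed with the Node.js MongoDB driver (transactions require a replica set; the db/collection names follow the quiz, everything else is illustrative). The balance check is folded into a filtered atomic `$inc`, closing the read-then-write race, and both updates run inside `session.withTransaction`:

```javascript
async function transferCredits(client, from, to, amt) {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const accounts = client.db("game").collection("game_accounts");
      // Atomic, filtered decrement: matches only if the sender exists
      // AND has enough credits, so no separate findOne() race window.
      const debit = await accounts.updateOne(
        { name: from, credits: { $gte: amt } },
        { $inc: { credits: -amt } },
        { session }
      );
      if (debit.matchedCount === 0) {
        throw new Error("not enough balance to transfer credits");
      }
      const credit = await accounts.updateOne(
        { name: to },
        { $inc: { credits: amt } },
        { session }
      );
      if (credit.matchedCount === 0) {
        // Throwing inside the callback aborts the whole transaction,
        // so the debit above is rolled back too.
        throw new Error("recipient account not found");
      }
    });
  } finally {
    await session.endSession();
  }
}
```

Either both updates commit or neither does, so a failure between the debit and the credit can no longer leave the data inconsistent.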
```javascript
Account.prototype.increaseBalance = function(amount, isCredit) {
  if (!isCredit) {
    this.debitBalance += amount;
  } else {
    this.creditBalance += amount;
  }
};
```
- It's not easy to judge without context, but I'd propose a separate function per kind of balance increase, so we could avoid the `isCredit` check (a potential point of failure) and give each function a single responsibility.
- Another note, again an assumption due to the lack of context: if we increase the balance of one account, shouldn't the amount be deducted from the other? To demonstrate the idea I implemented the following code:
```javascript
function Account(debitBalance, creditBalance) {
  this.debitBalance = debitBalance;
  this.creditBalance = creditBalance;
}

Account.prototype.increaseBalance = function(amount, isCredit) {
  if (!isCredit) {
    this.creditBalance -= amount;
    this.debitBalance += amount;
  } else {
    this.debitBalance -= amount;
    this.creditBalance += amount;
  }
};

const acc = new Account(100, 200);
console.log(acc); // Account { debitBalance: 100, creditBalance: 200 }
acc.increaseBalance(1, true);
console.log(acc); // Account { debitBalance: 99, creditBalance: 201 }
acc.increaseBalance(2, false);
console.log(acc); // Account { debitBalance: 101, creditBalance: 199 }
```
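The first suggestion can be demonstrated too; a standalone sketch with one method per balance kind (the method names are my own, not part of the original API), so no `isCredit` flag is needed at the call site:

```javascript
function Account(debitBalance, creditBalance) {
  this.debitBalance = debitBalance;
  this.creditBalance = creditBalance;
}

// One single-responsibility method per balance kind; no boolean flag
// argument whose meaning the caller can get backwards.
Account.prototype.increaseDebitBalance = function(amount) {
  this.debitBalance += amount;
};

Account.prototype.increaseCreditBalance = function(amount) {
  this.creditBalance += amount;
};
```

Call sites like `acc.increaseCreditBalance(5)` read unambiguously, whereas `acc.increaseBalance(5, false)` requires remembering which branch `false` selects.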
Database queries are getting slow when the database size increases. What are some of the typical solutions to improve performance?
- Indexing: ensure proper indexing for quick access to the database; avoid full scans.
- Optimize queries (by analyzing the execution plan and SQL patterns), e.g.:
  - `SELECT` specific fields instead of using `SELECT *`
  - Avoid `SELECT DISTINCT`
  - Create joins with `INNER JOIN` (not `WHERE`)
  - Use `WHERE` instead of `HAVING` to define filters
  - Use wildcards at the end of a phrase only, e.g. `LIKE 'xx%'`
- Limit stored/fetched data:
  - Use `LIMIT` to sample query results
  - Avoid correlated subqueries, as they evaluate row by row and slow down SQL query processing; a proper `JOIN` type may work faster.
  - Apply sharding to split data across machines
  - Analyze how the data is related and fetched: could it be combined into a single table (denormalization), or even migrated to a NoSQL DB?
- Define business requirements / cases:
  - Is the data fetched often or rarely? E.g., if a slow SQL query's result is needed only once a month, it most likely doesn't need optimization; we may schedule it to run during off-peak hours.
more info:
- https://www.sisense.com/blog/8-ways-fine-tune-sql-queries-production-databases/
- https://www.sqlshack.com/query-optimization-techniques-in-sql-server-the-basics/
- https://www.mantralabsglobal.com/blog/sql-query-optimization-tips/
Go to the following JSFiddle: http://applifier.github.io/developer-quiz/q8.html. See the code comments for the assignment. Remember to click save and return the url for your fiddle.
My solution: https://jsfiddle.net/niki4/3m46hf0r/ (copy in this repo: statsCollector.js)
Other solutions found:
- https://stackoverflow.com/questions/45309447/calculating-median-javascript
- http://jsfiddle.net/smahat13/zu4s91x3/146/
- https://gist.github.com/hadaytullah/7c5f007239c562dab91010b3152b974b
- https://pastebin.com/DzG5Gued
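The linked Stack Overflow thread is about calculating a median in JavaScript; assuming the fiddle's stats collector needs that step, a minimal median sketch (the function name and empty-input behavior are my choices, not from the assignment):

```javascript
function median(values) {
  if (values.length === 0) return NaN; // no defined median for empty input
  // Copy before sorting so the caller's array is left untouched,
  // and compare numerically (the default sort is lexicographic).
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]                          // odd count: middle element
    : (sorted[mid - 1] + sorted[mid]) / 2; // even count: mean of middle two
}
```

The numeric comparator matters: `[10, 9, 2].sort()` without one yields `[10, 2, 9]`, which silently corrupts the median.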
Go to the following JSFiddle: http://applifier.github.io/developer-quiz/q9.html. See the code comments for the assignment. Remember to click save and return the url for your fiddle.
My solution: https://jsfiddle.net/niki4/bwf56eop/4/ (copy in this repo: getRecentConversationSummaries.js)
Other solutions found: