feat(#12): cloud-based execution of move contact #177
Conversation
Really good work here - great to see a compose file that I suspect works out of the box!
My big question is how we're building and publishing images. I have some smaller feedback points and tried to make code suggestions where possible - but lemme know if I got any of those wrong!
Thanks @mrjones-plip for this great review, I will go through them one by one and get back to you. cc: @kennsippell
Amazing work Paul!
Thanks @kennsippell, great review. Really appreciate it. Will be making the right adjustments.
Testing today:
Great stuff Paul. I did another review here but I got tired and didn't look through tests yet.
Hi @mrjones-plip, it took a bit but I finally got all the docker feedback addressed 🥲. Would you like to check it again whenever you have time? @kennsippell, I have fixed the delay stuff as well. Thanks
Very very nice progress. This feels close.
job.log(`[${new Date().toISOString()}]: ${result.message}`);
const errorMessage = `Job ${job.id} failed with the following error: ${result.message}`;
console.error(errorMessage);
throw new Error(errorMessage);
at some point it might be nice to handle errors a bit better. maybe not in this PR, but what if cht-conf has an error - the most common scenario here is losing connectivity with the server during a heavy move. should the job be retried?
i'll make a quick comment about aborted/canceled jobs or generally handling any case where a contact's move is interrupted
- the good news is that if a move is interrupted, regardless of the state cht-conf can be rerun. it will aggregate all documents (those in the new position already and those in the old position) and move them all again. this is really an incredible win - if a job fails, it just needs to be retried.
- the bad news is that cht-user-management won't let you retry an aborted job if the top-level contact successfully moves. it will give an error like "A already has B as parent" or whatever.
It's going to happen, and we're going to need to deal with it at some point.
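For what it's worth, BullMQ has built-in retry support; here's a minimal sketch of what opting in could look like (the attempt count, backoff values, and connection settings below are assumptions, not anything agreed in this PR):

```ts
import { Queue } from 'bullmq';

// Hypothetical retry policy: give a failed move a few attempts with
// exponential backoff, so a transient loss of connectivity during a
// heavy move is retried automatically instead of needing a manual re-run.
const moveContactQueue = new Queue('MOVE_CONTACT_QUEUE', {
  connection: { host: 'redis', port: 6379 },          // assumed connection settings
  defaultJobOptions: {
    attempts: 3,                                       // assumed retry budget
    backoff: { type: 'exponential', delay: 60_000 },   // wait ~1m, 2m, 4m between tries
  },
});
```

Since cht-conf can safely be rerun after an interruption, retrying the whole job like this should be safe for the documents themselves; the "A already has B as parent" case in cht-user-management would still need its own handling.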
This is really important. The way I was dealing with it is to check the Board for the failed job's issue (from the logs). If it has failed due to a loss of connectivity, then I just retry it from the Board. But we can handle this properly in another PR.
can you make an issue so we don't forget?
yes! Looking really good here. Most of my feedback is trying to both make it easy to set up the dev environment and keep sane defaults for production.
While I did easily stand up an instance using the bash script, I'm unable to connect to redis for some reason. Both the main service and the worker service have this error, where 192.168.16.2 is the IP of the redis container:
docker logs cht-user-management-cht-user-management-1
Error: connect ECONNREFUSED 192.168.16.2:6378
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1606:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '192.168.16.2',
port: 6378
}
does this work for you?
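Side note: a stock redis container listens on 6379, so a refused connection on 6378 usually points at a port mismatch between the compose file and the app config. A minimal sketch of reading the connection from the environment so they can't drift apart - the REDIS_HOST/REDIS_PORT variable names here are assumptions, not what the repo necessarily uses:

```ts
import { Queue } from 'bullmq';

// Hypothetical helper: derive the Redis connection from env vars so the
// compose file and the app share a single source of truth. Variable names
// are illustrative; the defaults match a stock Redis container.
const connection = {
  host: process.env.REDIS_HOST ?? 'redis',
  port: Number(process.env.REDIS_PORT ?? 6379),
};

export const moveContactQueue = new Queue('MOVE_CONTACT_QUEUE', { connection });
```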
Thanks @mrjones-plip and @kennsippell for your great reviews 🔥. Really appreciate that, I will go through them thoroughly.
Yes @mrjones-plip, it works for me. We could have a little sync on that if it's okay with you.
Let's do one more, but I think we're basically there.
Nice work @paulpascal! I REALLY love that NODE_ENV is exposed in the .env file now so it makes it trivial to run in dev mode in docker - so handy!
I'm not getting any more errors for the redis service in the docker log, so I assume this is working? I don't know how to test.
I did have one issue in that I wasn't able to log in - here's what I did:
- deploy a 4.8.1 instance with docker helper at 192-168-68-26.local-ip.medicmobile.org:10446
- set CHT_DEV_HTTP=true and CHT_DEV_URL_PORT=192-168-68-26.local-ip.medicmobile.org:10446 and NODE_ENV=dev
- in CHT Core, create a test user with the User Manager role
- start the user management tool with: ./docker-local-setup.sh build
- try to log in with that user on 127.0.0.1:3000/login
expected: I can log in
actual: get error "login failed. please try again"
Here's my .env in case it's helpful:
NODE_ENV=dev
COOKIE_PRIVATE_KEY=quux
QUEUE_PRIVATE_KEY=smang
CONFIG_NAME=chis-ke
PORT=3000
EXTERNAL_PORT=3000
INTERFACE=0.0.0.0
CHT_DEV_HTTP=true
CHT_DEV_URL_PORT=192-168-68-26.local-ip.medicmobile.org:10446
From a docker and docker compose perspective, with the updated env.example and updated readme.md, I think this PR is ready to go from my side. I was still unable to log in, so please be sure to test this with a clean setup to ensure there are no regressions!
Thanks @mrjones-plip, I will make sure to have that working.
Amazing! Let's go!
Closes: #12
This PR introduces cloud-based execution of the move-contact feature, enhancing the user experience by replacing the previous cht-conf command-line approach. To achieve this, we've implemented bullmq, a powerful and flexible queue system built on Redis.
Here's a breakdown of the process:
- Upon initiating a contact move, essential details such as contact_id, parent_id, instanceUrl, and the current sessionToken (encrypted) are compiled and submitted as a job to the MOVE_CONTACT_QUEUE.
- A dedicated worker monitors this queue, processing incoming jobs: it runs the cht-conf command via a child_process, utilizing all pertinent job data, including the decrypted sessionToken.
- Upon successful execution, the job is marked as completed. In case of any failures, the job is flagged accordingly. Users can conveniently track job statuses directly from the user management tool's dashboard.
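To make that flow concrete, here is a minimal sketch of the two halves described above, assuming bullmq and using illustrative type, helper, and connection names; the cht-conf command-line arguments shown are placeholders, not the real invocation:

```ts
import { Queue, Worker } from 'bullmq';
import { spawn } from 'child_process';

const QUEUE_NAME = 'MOVE_CONTACT_QUEUE';
const connection = { host: 'redis', port: 6379 }; // assumed connection settings

interface MoveContactJobData {
  contactId: string;
  parentId: string;
  instanceUrl: string;
  sessionToken: string; // stored encrypted; decrypted only inside the worker
}

// Producer side: the web app compiles the move details and submits a job.
const queue = new Queue<MoveContactJobData>(QUEUE_NAME, { connection });

export async function enqueueMove(data: MoveContactJobData): Promise<void> {
  await queue.add('move-contact', data);
}

// Consumer side: a dedicated worker picks up jobs and shells out to cht-conf.
// The flags passed to spawn() are illustrative placeholders only.
new Worker<MoveContactJobData>(QUEUE_NAME, async (job) => {
  const { contactId, parentId, instanceUrl, sessionToken } = job.data;
  const token = decrypt(sessionToken); // hypothetical decryption helper

  await new Promise<void>((resolve, reject) => {
    const child = spawn('cht', [
      `--url=${instanceUrl}`,
      'move-contacts',
      `--contacts=${contactId}`,
      `--parent=${parentId}`,
    ], { env: { ...process.env, SESSION_TOKEN: token } });

    child.stdout.on('data', (chunk) => job.log(chunk.toString()));
    child.stderr.on('data', (chunk) => job.log(chunk.toString()));
    child.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error(`cht-conf exited with code ${code}`)));
  });
}, { connection });

// Hypothetical stand-in for the real decryption using the queue's private key.
function decrypt(encrypted: string): string {
  return encrypted;
}
```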
Note
Enabling cht-conf to support session tokens required implementing some modifications, detailed here. (The session token is initially obtained from the _session request.)
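For context on that last note, a session token can be obtained by POSTing credentials to CouchDB's /_session endpoint and reading the AuthSession cookie; a minimal sketch, with placeholder URL and credentials:

```ts
// Minimal sketch: POST credentials to /_session and extract the AuthSession
// cookie value. Requires Node 18+ for the global fetch; URL and credentials
// are placeholders.
async function getSessionToken(instanceUrl: string, user: string, password: string): Promise<string> {
  const res = await fetch(`${instanceUrl}/_session`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: user, password }),
  });
  if (!res.ok) {
    throw new Error(`_session request failed with status ${res.status}`);
  }
  const setCookie = res.headers.get('set-cookie') ?? '';
  const match = setCookie.match(/AuthSession=([^;]+)/);
  if (!match) {
    throw new Error('No AuthSession cookie found in the _session response');
  }
  return match[1];
}
```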