Can consul-replicate be used with Vault? #674

Closed
justintime opened this Issue Oct 6, 2015 · 54 comments

@justintime

Our use case dictates that we need to have Vault and its secrets available in multiple physical datacenters. However, writes to Vault only need to happen at one location. This makes for a very typical "one master with many slaves" arrangement.

Since we're using consul for the backend, I figured consul-replicate would be the perfect tool for the job. After setting it all up and running consul-replicate, I was happy to have initial success. However, after using it a bit more, I think I've found a showstopper for this setup.

It seems that each time consul-replicate copies a key/value from one consul dc to another, I have to restart the vault servers in the second datacenter before they will "see" that data.

Is this a known issue? Are there any workarounds, or another way to accomplish this kind of setup?

@jefferai
Member
jefferai commented Oct 6, 2015

It's not really a known issue in that this isn't an officially supported setup.

My guess is that you're running into the fact that Vault contains a read cache from the physical backend. I would imagine you're seeing this problem mostly for changed data, not new data.

In that case, your best current option would be to maintain a local fork that disables the cache. This will considerably impact performance, however.

@justintime

That's what I wondered as well. Our use case for Vault is simple: it's an encrypted key-value store that's referenced on app startup, and that's just about it. We run Consul on the same hosts that we run Vault on, so network RTT would be non-existent. I'll see if I can disable the read cache and see what the impact is.

I found an issue over in consul-replicate (referenced above) where it looked like someone else was using it to replicate vault data, I've posted there to see if he had more luck than I did. Also, there's another post from the mailing list where it looked like someone else had the same use case, and some other potential issues were brought up: https://groups.google.com/forum/#!msg/vault-tool/cnNqgpZxvh0/naMaRHXZAgAJ

To that point, do you think I could achieve what I'm striving for by using a different storage backend?

@roycrowder

I've had success with the replication across sites with consul-replicate and haven't run into a caching issue. However, I use the secondary site in more of a DR scenario so we don't have people pulling from it as often. I can try doing some more testing on my side and see if I get the same scenario.

How long does it take for you to get caching issues? You say you saw initial success, is there some time frame in which the remote instance stops picking up the new values?

@jefferai
Member
jefferai commented Oct 7, 2015

@justintime BTW, I marked this as enhancement/thinking earlier because I think it should be pretty easy to make the cache a tunable. It would be quite useful to know if that's actually the source of your issue though. If you like I can help you with a patch to disable it locally for you and you can see if it fixes your problems.

@justintime

@roycrowder I think the only reason I saw initial success was because I hadn't started Vault in my destination DC yet. I wanted to replicate, and then make sure that the start->unseal->read workflow worked. That did work. I'm not 100% sure if it was an add or an edit, but basically any changes seemed to not show up until after I restarted and unsealed the Vault leader. I'm going to set up some Vagrant boxes so I can test this more easily and make sure.

My guess is that if you have vault running in both places, you can add a new k/v and see it replicate. You'll likely be able to read that from the destination vault as well. Changing the value on that same key will likely cause a replication event, but you won't be able to read that new value from the destination vault until you restart. If you can check that and report back, it would help a lot.
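
In other words, something like the following (a rough sketch using the generic secret backend; the key name repl-test is just a placeholder):

# on the source DC's vault
vault write secret/repl-test value=one
# on the destination DC's vault -- this read will probably work
vault read secret/repl-test
# update on the source, then read again on the destination;
# my guess is it still returns "one" until the destination vault is restarted
vault write secret/repl-test value=two
vault read secret/repl-test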

@jefferai If you have the time to generate a patch, I'll make the time to test it out.

@sethvargo
Member

Hey everyone,

Chiming in here from the Consul Replicate (CR) side of things 😄. Putting any Vault-specific issues aside, CR is designed to mirror a particular prefix from one Consul DC to another. If you are using Consul as the backend storage mechanism, CR will definitely replicate the data.

There are a few things I would like to mention. CR only replicates one way. It can replicate to multiple datacenters, but there has to be one "primary" datacenter. Consider 3 datacenters A, B, and C. B and C both pull replication data from A, so A is the "primary" datacenter. Any data written to B's or C's KV store that falls under the replicated prefix will not be replicated back to A or to its peer follower. In fact, during the next replication, A will likely overwrite that change in B's or C's KV to match A.

Assuming Vault's cache is disabled, this means that if there's a Vault instance in each DC, any writes to Vault-B and Vault-C would effectively be overwritten on the next change to the KV prefix in DC-A. This behavior might be okay if Vault-B and Vault-C are read-only, but I think even that behavior may be undefined. Similarly, depending on the number of keys watched, replication can occur quite frequently and can take up to a few seconds to complete. This opens up a race condition where Vault-A writes data such as a new transit key (so Vault-B and Vault-C still have an old key), and an application asks Vault-B to decrypt a secret that was generated with Vault-A's new transit key. The decryption would fail because the updated Vault data (the new transit key) hadn't yet been populated by CR.
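
As a rough illustration, the kind of one-way pull described above might be driven by something like this (a sketch only; the datacenter name dc-a and the vault/ prefix are assumptions, and the exact flags should be checked against your Consul Replicate version):

# run against a Consul agent joined to the *destination* datacenter;
# pulls everything under vault/ from the "primary" DC (dc-a here).
# Add -once for a single pass instead of continuous streaming.
consul-replicate -prefix "vault@dc-a"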

@jefferai
Member
jefferai commented Oct 7, 2015

To echo what @sethvargo just mentioned, I'm happy to work on a patch to test whether disabling the cache helps things out, and I'll try to post one later today, but there are many potential drawbacks to replication and you'll need to be careful. This would be a purely read-only replication and you'd need to limit it to specific prefixes. As I said on the ML thread, you would not, for instance, want to be mirroring your expiration data or you could end up having many revocations attempting to happen for every lease.

You'd also probably want to build some robustness into your applications, or think carefully about the configuration knobs set within Vault. For instance, for the transit key example, you'd need to not set your minimum allowed decryption version to the current value, so that if it takes a bit to populate a rotated key applications can still decrypt data -- much less if this is an actual new key with no prior versions.
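
Concretely, the knob in question is the transit key's minimum decryption version; a hedged sketch of keeping it at an older version (the key name my-app-key is hypothetical, and the endpoint should be verified against your Vault version):

# keep older key versions decryptable so a lagging replica can still
# decrypt ciphertext produced before a rotation has propagated
vault write transit/keys/my-app-key/config min_decryption_version=1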

This is definitely a "hope it works but it's definitely not officially supported" scenario :-)

@justintime

Yeah, I think everything Seth mentioned still fits my "master with read-only slaves" scenario. Can you speak to whether or not the S3 (or even the mysql) backend might be a better fit for me? I certainly don't need the scalability of Consul; at most we'd be looking at a couple hundred requests per day.

If vault is aware of an "mtime" change on S3 or MySQL and flushes its read cache, that's probably all I need.

@jefferai
Member
jefferai commented Oct 7, 2015

@justintime The biggest probable issues with this are not really in Consul, per se -- they're with things like propagating leases across to other datacenters. That's a concern regardless of the backend choice. So the main concern here is probably that some backends will make it easier or harder to sync only the proper entries.

It turns out that it's actually easy to disable the cache -- it's just not exposed in configuration currently. Open up vault/core.go and, in the function NewCore, simply remove the if statement that checks conf.DisableCache; it currently looks like this:

        // Wrap the backend in a cache unless disabled
        if !conf.DisableCache {
                _, isCache := conf.Physical.(*physical.Cache)
                _, isInmem := conf.Physical.(*physical.InmemBackend)
                if !isCache && !isInmem {
                        cache := physical.NewCache(conf.Physical, conf.CacheSize)
                        conf.Physical = cache
                }
        }

After removing the check for conf.DisableCache and rebuilding, the physical cache will be disabled. I'd be interested in knowing how this works for you. If it solves your problem we could possibly expose this in configuration.

@jefferai jefferai added this to the 0.4.0 milestone Oct 7, 2015
@justintime

I'm having trouble getting a build to work. I have 0.2.0 deployed in my source datacenter, but I'm unable to get v0.2.0 to build due to some godep issues.

I made the mistake of running 0.3.1 in my destination datacenter, but discovered quickly that version mismatches are not a good thing :) Any tips on getting the 0.2.0 release to build?

DC-MB-AM-002:vault justine$ make dev
go generate ./...
==> Getting dependencies...
# github.com/hashicorp/vault/physical
physical/s3.go:59: undefined: credentials.EC2RoleProvider
physical/s3.go:64: cannot use region (type string) as type *string in field value
make: *** [dev] Error 2
@jefferai
Member
jefferai commented Oct 7, 2015

make dev tries to get dependencies via normal go get means, but their APIs have changed.

Make sure you have godep installed, then try godep go install from the Vault source dir. That should work.
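
Roughly this, assuming a standard GOPATH layout with the Vault source checked out under it (a sketch, not exact commands):

# install godep, then build Vault against the pinned dependencies
go get github.com/tools/godep
cd "$GOPATH/src/github.com/hashicorp/vault"
godep go install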

@justintime

The godep go install didn't fail, but the build did afterwards. Pretty sure I'm running into #536.

I'm not so sure that disabling the cache is going to fix my problems anyway. I saw in the code that reads of keys that don't exist aren't cached. However, adding a new key, replicating it, and attempting to read it on the remote side was still failing. That seems to indicate there are other things preventing replication from working. I wonder if I've missed something -- @roycrowder I'd love to hear if you found any similar issues when you get the time to check.

@jefferai
Member
jefferai commented Oct 8, 2015

@justintime it might be interesting to attempt a read from two other places when the read is failing on Vault as you're attempting to do it now:

  1. Directly from Consul; you'll get an encrypted version of the value, but simply knowing that the read works is something.
  2. Through Vault's sys/raw endpoint. You'll need to figure out the path in Consul where the value is stored; this will involve some sleuthing to figure out the UUID. In my example, I found that the UUID corresponding to my test instance's secret backend was 03858025-56b4-1537-e5d1-c3f581ca0e0a, so I was able to run it (via a root token) like this:
http GET http://127.0.0.1:8200/v1/sys/raw/logical/03858025-56b4-1537-e5d1-c3f581ca0e0a/foo X-Vault-Token:$(cat ~/.vault-token)
HTTP/1.1 200 OK
Content-Length: 118
Content-Type: application/json
Date: Thu, 08 Oct 2015 00:23:46 GMT

{
    "auth": null, 
    "data": {
        "value": "{\"bar\":\"gaz\"}"
    }, 
    "lease_duration": 0, 
    "lease_id": "", 
    "renewable": false, 
    "warnings": null
}
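
For (1), a direct read from Consul's KV API could look something like this (a sketch reusing the example UUID above and assuming the default "vault" path for the Consul backend; the Value field comes back base64-encoded and, of course, still encrypted):

# ask the local Consul agent for the raw stored entry
curl -s http://127.0.0.1:8500/v1/kv/vault/logical/03858025-56b4-1537-e5d1-c3f581ca0e0a/foo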

If (1) works but (2) doesn't that will tell us something, because (2) is a straight passthrough to the physical storage (although it does go through the cache).

If both (1) and (2) work, try it with a normal read again and see if it works -- maybe by that time something will have finished catching up/propagating.

Let me know!

@justintime

I was having some weird issues running CR in '-once' mode, so I wiped the remote consul DC store, and started over.

Good news: adding a new key to the source almost immediately replicates over, and it is readable by the destination vault without a restart. As you guessed, updating an existing entry replicates (at least according to CR's verbose logs), but Vault serves up the old data.

I'll see if I can catch you on IRC to see about building a modified v0.2.0 to see if disabling the read cache fixes that.

@justintime

Disabling the cache in vault did indeed fix the problem. For documentation, I just removed the '!' on https://github.com/hashicorp/vault/blob/master/vault/core.go#L302 and recompiled.

At least for my use case, exposing DisableCache in vault.hcl would be awesome. Even better would be an official, supported way to do what I'm doing, but I'll take what I can get :)

@jefferai jefferai added accepted and removed future thinking labels Oct 8, 2015
@jefferai jefferai closed this in 8201ae0 Oct 12, 2015
@jefferai
Member

@justintime I believe that should do it -- simply set disable_cache in the configuration file to true. Please let me know if it's not working for you!
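
For anyone following along, that would look roughly like this (a sketch; the file path, Consul address, and path are placeholders, and the listener stanza is omitted):

# hypothetical minimal vault.hcl with the read cache turned off
cat > /etc/vault/vault.hcl <<'EOF'
backend "consul" {
  address = "127.0.0.1:8500"
  path    = "vault"
}

disable_cache = true
EOF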

@justintime

Awesome. I'll build it and test it today. You had mentioned that master was in a good state the other day, I'll hold you to that statement :)

@justintime

Just to follow up, this functionality is working as advertised for me. Thanks for the help.

@jefferai
Member

Great! And I'm looking at #706 separately.

@justintime

Hi Jeff,

Rather than try to explain what I'm seeing, I'll just paste you the results from some scripted tests I set up.

Here's the "read" script:

for domain in sourcedc.com destdc1.com destdc2.com destdc3.com; do
  for i in 1 2 3; do
    export VAULT_ADDR="https://vault${i}.${domain}:8200"
    STATUS=`vault status 2>&1 | grep Mode | awk '{print $2}'`
    VALUE=`vault read secret/password1 2>&1 | grep value | awk '{ print $2 }'`
    if [ -z "${VALUE}" ]; then
      VALUE='ERROR'
    fi
    echo "${VAULT_ADDR}(${STATUS}): ${VALUE}"
  done
done
  1. Restart all vault instances in all data centers, and unseal them all
  2. Check sanity and make sure all instances agree on a value by running the read script:
https://vault1.sourcedc.com:8200(active): 1445443281
https://vault2.sourcedc.com:8200(standby): 1445443281
https://vault3.sourcedc.com:8200(standby): 1445443281
https://vault1.destdc1.com:8200(active): 1445443281
https://vault2.destdc1.com:8200(standby): 1445443281
https://vault3.destdc1.com:8200(standby): 1445443281
https://vault1.destdc2.com:8200(active): 1445443281
https://vault2.destdc2.com:8200(standby): 1445443281
https://vault3.destdc2.com:8200(standby): 1445443281
https://vault1.destdc3.com:8200(active): 1445443281
https://vault2.destdc3.com:8200(standby): 1445443281
https://vault3.destdc3.com:8200(standby): 1445443281
  3. Write a new value to the source dc:
export VAULT_ADDR="https://vault1.sourcedc.com:8200"
vault write secret/password1 value="`date +%s`"
  4. Check each instance with the read script:
https://vault1.sourcedc.com:8200(active): 1445443599
https://vault2.sourcedc.com:8200(standby): 1445443599
https://vault3.sourcedc.com:8200(standby): 1445443599
https://vault1.destdc1.com:8200(active): 1445443281
https://vault2.destdc1.com:8200(standby): 1445443599
https://vault3.destdc1.com:8200(standby): 1445443599
https://vault1.destdc2.com:8200(active): 1445443281
https://vault2.destdc2.com:8200(standby): 1445443599
https://vault3.destdc2.com:8200(standby): 1445443599
https://vault1.destdc3.com:8200(active): 1445443281
https://vault2.destdc3.com:8200(standby): 1445443599
https://vault3.destdc3.com:8200(standby): 1445443599

As you can see, all the standby instances are getting the updated values, but the active leaders in the destination DC's are holding onto the old value. I have verified that disable_cache=true is present on all nodes.

What really stumps me here is that I thought standby vaults never actually answered anything themselves, and instead forwarded the client over to the active vault in an HA setup?

@jefferai
Member

Yes, that should be true.

Any chance you can run the script again, but remove all of the grep, awk, and 2>&1 redirection? My guess is that your standbys are actually forwarding requests to the Vault master in sourcedc.com.

@justintime

Hmm, do you get any notification when you're getting redirected? I don't seem to:

https://vault1.sourcedc.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault2.sourcedc.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault3.sourcedc.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault1.destdc1.com:8200
Key             Value
lease_duration  2592000
value           1445443281


https://vault2.destdc1.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault3.destdc1.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault1.destdc2.com:8200
Key             Value
lease_duration  2592000
value           1445443281


https://vault2.destdc2.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault3.destdc2.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault1.destdc3.com:8200
Key             Value
lease_duration  2592000
value           1445443281


https://vault2.destdc3.com:8200
Key             Value
lease_duration  2592000
value           1445443599


https://vault3.destdc3.com:8200
Key             Value
lease_duration  2592000
value           1445443599
@jefferai
Member

Not with the CLI; if you use an HTTP call instead (via curl, for instance), the redirect will either be printed in the normal output or shown when you use -v (I forget which).
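
Something like this makes the redirect (if any) visible (a sketch; the secret path matches the test script above):

# -v prints response headers to stderr; a standby answers with a 307 and a
# Location header pointing at whichever node it believes is active
curl -sv -H "X-Vault-Token: $(cat ~/.vault-token)" "$VAULT_ADDR/v1/secret/password1" 2>&1 | grep -Ei '^< (HTTP|Location)'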

@justintime

Yep, I think I'm about ready to let this idea go.

The "leaders" in the destination DC's all think they're the leaders, and are active, but the standby instances are all pointing to sourcedc's leader. The net effect is that the 'leader' in the destdc will answer directly (with the old value), but the standby nodes will redirect over to the leader in sourcedc, which answers with the new value.

I'm sure that the standbys are getting the leader info from Consul, and since I'm replicating that data forward from sourcedc, I'm overwriting whatever gets put there by the election that happens at startup in the destination dc's.

This likely means that the disable_cache option isn't quite working right either. I think that what I'm wanting to do won't work without a fair amount of coding to support it in vault, and I understand that's not something you all are aiming for at the moment. I'm going to just recommend we either live with relying on the VPN being up, or manually create our secrets in all dc's.

Thanks for all your help on this, Jeff, you've been awesome.

@jefferai
Member

It may still be worth some fiddling...there are two separate issues here:

One is that you're replicating all data. As I said before, you'd need to limit your replication to specific prefixes and keys. What's happening is that you have a local cluster in the target DC, the data is being sent over via consul-replicate, and along with it is coming the key with the information about which node is the leader. That's not the only key that will cause problems; you definitely do not want to replicate expiration information across. From looking at a basic layout with the file physical backend, you'd want to copy /sys/token but not /sys/expire, yes to /logical/, and yes to much of /core but not /core/leader and not /core/lock.

The other problem is the new value not showing up on the leaders in the other DCs. That should have been solved by disabling the cache, but are you sure that the right prefixes are being copied over with consul-replicate? Does it work after restarting Vault in those DCs?

@jefferai jefferai reopened this Oct 21, 2015
@justintime

Well, I'm game for fiddling if you are. Just let me know if you think we've crossed the point of needing custom development.

Regarding what I'm replicating, I'm simply replicating everything under /vault. I hadn't poked around in there; would you like me to set up the specific prefixes you listed above instead?

Regarding the cache: yes, restarting and unsealing all the vaults results in them all agreeing again. The slaves in the remote DC's are still sending a 307 redirect back to the leader in the source DC, but the "leaders" in the remote DC's are answering with the updated value.

@jefferai
Member

The values I gave above should be a good baseline. You definitely do not want /core/leader or /core/lock and if you replicate /sys/expire you'll have multiple DCs all trying to revoke the same leases, which is a very bad idea.

Earlier in testing (#674 (comment)) you said that forcing the cache off worked. Did something change, or were you only testing on standbys (which were redirecting to the leader in the source DC), or...? I'm trying to figure out if I didn't plumb the config option through correctly, somehow, or if it never really worked properly to begin with.

@justintime

Unfortunately, I didn't check whether I was hitting the standbys or the leaders in my earlier testing. I saw success and celebrated :/

@justintime

Here's what I came up with:

  { 'source' => 'vault/sys/token@kry-default' },
  { 'source' => 'vault/sys/policy@kry-default' },
  { 'source' => 'vault/logical@kry-default' },
  { 'source' => 'vault/audit@kry-default' },
  { 'source' => 'vault/core/audit@kry-default' },
  { 'source' => 'vault/core/auth@kry-default' },
  { 'source' => 'vault/core/keyring@kry-default' },
  { 'source' => 'vault/core/mounts@kry-default' },
  { 'source' => 'vault/core/seal-config@kry-default' }

Let me know if I need to change that, I'll work on pushing that out this afternoon.

@jefferai
Member

I think that looks fine. You may not want to replicate vault/audit and vault/core/audit unless you have the same audit config everywhere (to the same file location for instance).

@justintime

After cleaning /var/lib/consul/* and starting fresh with the above settings, I'm at least getting consistent results.

I verified with curl that I'm now getting redirected to the proper leader in each DC rather than the leader of the source DC. After updating the value in the source DC and running my read script, I see all vaults in the source DC have the new value, but all other vaults have the old value.

After restarting and unsealing all Vaults, they all agree again on the new value. I'm starting to think we didn't quite get the disable_cache functionality to 100% -- what are your thoughts?

@jefferai
Member

Is this using the config value or making that one-line change you made before?

@justintime

config value:

disable_cache = true
@jefferai
Member

Any chance you can test with the same code change you tried before? I don't see why the config value wouldn't be populating through, but just in case...

@justintime

Progress! I did nothing but remove the ! in the if !conf.DisableCache { in core.go, recompile, and push it to one of the 3 remote DC's. I updated multiple times, and each time the vaults with the modified binary answered with the updated value while the other ones answered with the old value. So that code is doing what it's supposed to, but setting the value in the conf file must not be making it to the conf object?

@jefferai
Member

Great; I'll dig in there. I did write some (passing) unit tests to ensure that the value was being read from the config files, so somewhere it seems like it's getting lost.

@jefferai
Member

@justintime Sorry for the delay on this...weekend + daycare woes.

I just pushed a change to master that should fix this. I totally missed a spot where I needed to plumb it through -- so the config file reading and merging was working in the tests, but was not actually being passed through to the final core.

Let me know!

@justintime

Just verified this works as advertised, thanks so much for all of your help!

@justintime justintime closed this Oct 29, 2015
@jefferai
Member

@justintime Once you get some experience with this it may be cool for you to put a writeup of your setup and running notes into the mailing list. Other people may want to do something similar.

@mfischer-zd
Contributor

Yes, please -- we would love to have a single Vault master in a multi-DC environment.

@justintime

@jefferai I'll try and put together a (gasp) new blog post about it. Is it safe to assume the disable_cache feature will make it into 0.3.2?

@jefferai
Member

The next planned release is 0.4, but yes, it will be in there. While I'd love to see this post, I do have a couple of requests, to ensure that people trying to follow in your footsteps don't run into problems:

  • Please note that it's not officially supported...you're on your own
  • Please note the dangers of copying over the entire Consul tree as per my earlier comments in this issue

Thanks!

@mfischer-zd
Contributor

It won't be supported in 0.4, or it won't be supported in the current release?

@jefferai
Member

Clash of terminology. The ability to turn off the cache will be present in 0.4, hence this functionality will be supported. The behavior of using consul-replicate in this way to distribute data to multiple datacenters will not be officially supported. If problems are hit that are fixable (such as adding the flag to disable the read cache), I'm happy to work to fix them, but the overall solution is at your own risk, at least for now.

@mfischer-zd
Contributor

OK, will wait until it's officially supported.

@jefferai
Member

There are currently no plans on the roadmap to officially support this kind of replication, so it might be a long wait. However, a better way to look at it might be that there is a not-officially-written-down-but-listed-above-in-the-comments set of prefixes and keys that should not be replicated to any read-only set of Vault servers. Maybe at some point that list can become more official, but then it lets users manage the replication however they wish, as appropriate for their chosen storage backend.

@mfischer-zd
Contributor

There are currently no plans on the roadmap to officially support this kind of replication, so it might be a long wait.

While I appreciate @justintime's research in this matter, I'm hesitant to implement a solution that may not work with a future Vault release.

Can we get this on the roadmap, please?

@jefferai
Member
jefferai commented Nov 2, 2015

@mfischer-zd consul-replicate can stream data from one Consul cluster to another in real time. There are no plans to add a streaming-replication solution in Vault itself, especially when a well-tested one exists at the storage level.

The set of paths one would want to replicate will vary from one installation to the next. The only things that are fairly constant are the things you don't want to replicate. That could be codified into a set of best practices, but any such best-practices document would continue to inform the user that messing about with the storage layer of Vault will never be supported.

@mfischer-zd
Contributor

There are no plans to add a streaming-replication solution in Vault itself, especially when a well-tested one exists at the storage level.

It's fine, in my view, to declare that real-time replication is supported only when Consul is the storage backend and consul-replicate is used to replicate the data.

messing about with the storage layer of Vault will never be supported.

This is what worries me. You're not willing to make any stability guarantees here?

@jefferai
Member
jefferai commented Nov 2, 2015

I cannot promise that Vault's internal data store will never change. This means that, if relying upon some specifics of the layout of the internal data store, you'd want to perform due diligence during an upgrade cycle -- the same as any other aspect of Vault that undergoes changes from one version to the next.

@mfischer-zd
Contributor

How about notices to replication users in release notes, or bumping the major version per semver when the storage layout changes?

@jefferai
Member
jefferai commented Nov 2, 2015

Vault is still very young and nowhere near 1.0, much less a major version bump, regardless of what changes. Storage level changes would never happen in anything less than a minor version bump, currently.

@justintime

When I run something at version 0.3.1 in production, I fully expect to have to put a lot of extra rigor into even minor version upgrades. Comes with the territory, IMO.

I'll make 0.3.1 work now, and when I upgrade, it will have weeks of testing in dev and staging before it goes to production.

@ntnn ntnn referenced this issue in hashicorp/consul-replicate Sep 14, 2016
Closed

Vault with Consul #55
