
Insufficient support for Multireddits #344

Closed
voussoir opened this issue Nov 15, 2014 · 36 comments
Labels
Feature New feature or request

Comments

@voussoir
Contributor

I've done several searches throughout the repo, and as far as I can tell there aren't any resources for handling Multireddits. I'd love to help develop this if a project manager can get some groundwork started. Any plans? (Is there a to-do file somewhere?)

Multireddit API

@bboe bboe added the Feature New feature or request label Nov 15, 2014
@bboe bboe added this to the praw3 milestone Nov 15, 2014
@bboe
Member

bboe commented Nov 15, 2014

I'm adding some TODO items for this task. I would suggest starting simple and getting the list of multis working first (make a single PR for that). Then make a new PR for each feature. Tests will need to be added for each function to ensure it works.

  • Add a praw.objects.MultiReddit class. This will contain the MultiReddit-relevant methods.
  • Add praw.__init__.MySubredditsMixin.get_my_multis(), which returns instances of the MultiReddit class.
  • Add instance method praw.objects.MultiReddit.delete() (should fail if the authenticated redditor is not the owner).
  • Add instance method praw.objects.MultiReddit.copy(to) (should fail if the to named multi already exists).
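The TODO items above could fit together roughly like this. This is only an illustrative sketch: the `reddit_session` plumbing, the `request_json` signature (including a `delete` flag), and the endpoint strings are assumptions standing in for PRAW's actual internals.

```python
# Hypothetical skeleton for the TODO items above; names and endpoints
# are stand-ins for PRAW internals, not the real implementation.

class MultiReddit(object):
    """Container for one multireddit's data and its related methods."""

    def __init__(self, reddit_session, author, name):
        self.reddit_session = reddit_session
        self.author = author
        self.name = name
        self.path = '/user/{0}/m/{1}'.format(author, name)

    def delete(self):
        # The server should reject this if the authenticated redditor
        # is not the owner of the multireddit.
        return self.reddit_session.request_json(
            'api/multi' + self.path, delete=True)

    def copy(self, to):
        # The server should reject this if a multi named `to` already
        # exists under the authenticated redditor's account.
        me = self.reddit_session.user.name
        data = {'from': self.path,
                'to': '/user/{0}/m/{1}'.format(me, to)}
        return self.reddit_session.request_json('api/multi/copy', data=data)
```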

@voussoir
Contributor Author

Hi bboe,

I'm glad to be working on this! I will be ready for the initial PR as soon as get_my_multis() is working and I make test cases, but I have a few questions for you:

  1. get_my_multis() is crashing on root = page_data.get() because the endpoint for getting my multis returns information differently from the one for getting my subreddits:
    Getting multis returns a plain list of multiple kind=LabeledMulti objects.
    Getting subscriptions returns a kind=Listing containing multiple subreddits.
    So page_data.get() doesn't like the fact that it's being handed a list. I didn't want to make any changes to get_content without asking you about it, since it's such a central part of PRAW. How should I solve this problem?
  2. Using get_multireddit() for individual multis is working okay so far. It makes the object and I'm able to see the created attributes, but the subreddit attribute is still a list of dicts: {"name": "redditdev"}, {"name": "botwatch"}. How do I make the subreddit attribute a list of Subreddit objects? I've been looking around and just can't figure that out. It probably has to do with the unusual structure they arrive in, since they don't have the kind=t5 that lets PRAW auto-convert them.

Thanks!

@bboe
Member

bboe commented Nov 18, 2014

A few things first. The branches you are working on do not align with PRAW's branches, which will make creating a PR very difficult. Fixing this will be kind of a pain.

Step 1 is to make sure that your master branch is always in sync with the primary master branch. First, assuming you don't have any extra work on your master branch, run the following:

git checkout master
git remote add upstream https://github.com/praw-dev/praw.git
git fetch upstream
git reset --hard upstream/master
git push --force

There are a number of ways to fix your multi-reddit branch. We're going to do it by creating a new branch (so we have the original in case we mess up):

# From the master branch
git checkout -b multis
git cherry-pick 36ee352def1b80c0a89da946457b6a074ea5e905

36ee352def1b80c0a89da946457b6a074ea5e905 is the commit hash for the only new change you've made to your branch. If you add more changes, you'll want to add their hashes as well. At this point, for each commit, you'll need to resolve the conflicts. To find the conflicts run git status and find the files under the "both modified" section. Open those files, and find the sections that look like:

<<<<<<< HEAD                                                                             
...
=======
...
>>>>>>> ...

Unless you know you specifically changed something in the second section, then use only what's from the first section. Then, for each conflicted file run git add {file} where {file} is the name of the file that you had to resolve conflicts for, and finally run git cherry-pick --continue. If you specified multiple commits to cherry-pick you may need to repeat this conflict resolution process.

Finally, once your new branch is fixed, push that branch to your remote git push -u origin multis and verify that the changes committed are all intended changes. If so, then you can delete your original branch that is desynchronized from the "proper" master.

Once you've fixed that, for #1, try passing in root_field=None when calling get_content and see if that helps. If not, and you feel you're stuck just make sure you have everything you're working on based properly off the master branch, and when I have time, I can work out the few pieces you are stuck on. Thanks!

@voussoir
Contributor Author

Dammit, I hadn't even thought about looking for changes first, you're way too patient with me. Well, everything is in the right place now.

Problem 1 still isn't happy. Line 491 produces a list, and it's immediately tossed into line 498, which is trying to do list.get(). There will need to be a special case for multis that knows what to do with the list. I could probably make a miniature version of get_content() specifically for multireddits and put it right inside the get_my_multis function, or I could do something inside get_content() itself, but that's why I wanted to ask for your recommendation. I'm not exactly stuck yet, but I want to make sure I do this right.

I have solved problem 2 by adding this line to the end of the Multireddit constructor, so that's pretty cool.

self.subreddits = [Subreddit(reddit_session, x['name']) for x in self.subreddits]

You may have a preferred way of doing this; I'm happy to change it. I don't exactly understand at what point the text for things like "author" is converted into Redditor objects, etc., but I'm obviously missing something.

@bboe
Member

bboe commented Nov 19, 2014

Given that /api/multi/mine does not appear to take the limit parameter, I don't think using get_content is the correct approach. I would suggest just iterating over the response from self.request_json(url) with no parameters.

Maybe make a comment as to why get_content is not being used.
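A minimal sketch of that approach, with the network call factored out so the parsing is visible. The JSON shape follows the LabeledMulti structure described in the question above; the helper name and the sample data are hypothetical.

```python
# Hypothetical helper: /api/multi/mine returns a bare JSON list of
# LabeledMulti objects (not a Listing), so get_content's paging logic
# does not apply and we simply walk the decoded response.

def parse_my_multis(response):
    """Return the path of each LabeledMulti in the raw JSON list."""
    return [item['data']['path'] for item in response
            if item.get('kind') == 'LabeledMulti']

sample = [{'kind': 'LabeledMulti',
           'data': {'path': '/user/voussoir/m/mangoes',
                    'subreddits': [{'name': 'botwatch'}]}}]
print(parse_my_multis(sample))  # ['/user/voussoir/m/mangoes']
```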

@bboe
Member

bboe commented Nov 19, 2014

I just filed https://github.com/reddit/reddit/issues/1177. Let's proceed as I indicated for now, and hopefully in the future we'll be able to use get_content instead.

@voussoir
Contributor Author

Thanks, that's exactly the answer I was looking for. Function get_my_multis is now fully operational. I have created a multireddit on your test user, http://reddit.com/user/pyapitestuser2/m/publicempty, which should remain public and empty to be used in the tests.

Pull 345

Edit: Pull 345 closed. Will start on MR creation, copying, and deletion for a future pull.

@voussoir
Contributor Author

Hi bboe

I'm trying to create multireddits, but my function appears to be tripping over the cache. One of the POST params is a list and it's creating "Unhashable Type" errors when checking the praw cache.

r.create_multireddit()

>>> a=r.create_multireddit('mangoes', subreddits=['botwatch'])
{'visibility': 'private', 'subreddits': [{'name': 'botwatch'}], 'name': 'mangoes'}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\site-packages\praw\decorators.py", line 323, in wrapped
    return function(cls, *args, **kwargs)
  File "C:\Python34\lib\site-packages\praw\__init__.py", line 2240, in create_multireddit
    return self.request_json(self.config['multireddit_about'], data=data)
  File "C:\Python34\lib\site-packages\praw\decorators.py", line 161, in wrapped
    return_value = function(reddit_session, *args, **kwargs)
  File "C:\Python34\lib\site-packages\praw\__init__.py", line 540, in request_json
    response = self._request(url, params, data)
  File "C:\Python34\lib\site-packages\praw\__init__.py", line 395, in _request
    response = handle_redirect()
  File "C:\Python34\lib\site-packages\praw\__init__.py", line 368, in handle_redirect
    timeout=timeout, **kwargs)
  File "C:\Python34\lib\site-packages\praw\handlers.py", line 130, in wrapped
    if _cache_key in cls.cache:
TypeError: unhashable type: 'list'
>>>

This is not an issue with the API, because I had a typo in the URL for a very long time and the error never changed, meaning the call wasn't even making it to the actual request stage. I went searching for some reference, but I don't think any other function uses lists in this way.

Any advice?

Create MR API

Also, I figure I should ask while I have your attention: While Submission, Account and Stylesheet-Image deletion each have special delete endpoints, MR deletion uses the same endpoint as fetching its info. With this in mind, what should I put into data so that request_json knows I want to delete it?

With your help on these two questions it's only a matter of time until I have Create, Delete, Copy and Rename ready to go (This time I'll make sure I'm up-to-date and squashed).

Thanks again!

@bboe
Member

bboe commented Nov 24, 2014

  1. Try using a tuple. The JSON encoder should serialize it as a list, but tuples are immutable.

  2. I suspect we'll need to add a delete=False parameter to _request, and pass that through to force the underlying request to issue a DELETE rather than a POST. If you're not sure of the best way to add that, just let me know and I can throw it in there.

Thanks for all the great work.

@bboe
Member

bboe commented Nov 24, 2014

When you get a chance, can you rename get_my_multis to get_my_multireddits? I decided I don't much like the short version. That can be its own PR. Thanks!

@voussoir
Contributor Author

Hmm, last night I also tried setting subreddits = tuple(subreddits) before putting it into data but I got the same error with "unhashable type 'dict' " instead. I guess I'll have to try a few more things since I know the source of the problem.

The delete parameter makes sense to me. I haven't done much with actual http requests since PRAW does everything for me, but I'm sure I'll figure that out.

I agree with the decision to rename.

@bboe
Member

bboe commented Nov 24, 2014

I'll take a look later at your caching issue unless you figure it out. I feel like one or more of the existing methods has resolved a similar issue.


@voussoir
Contributor Author

This StackOverflow answer taught me something:

Not every tuple is hashable. A tuple containing non-hashable items is not hashable:

Because the subreddits list is full of dicts, it doesn't matter whether subreddits is a list type or tuple type. This small snippet recreates the problem that I encounter in the cache:

>>> a=[{'name':'botwatch'}, {'name':'gold'}]
>>> a=tuple(a)
>>> b={}
>>> b[a]=5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'

The reddit source code is definitely expecting a list full of dicts, so my options are

a) Give the cache a specialized version of data while sending the actual data to reddit

b) Disable caching for create_multireddit entirely, which no other function seems to do.

c) Raise an issue on their github and wait for change.

create_multireddit is not the kind of operation that really needs caching, but it seems weird that such a core part of PRAW should change for this fringe method. Why do they want dicts with a single key anyway?!

What are your thoughts? In the meantime, I'll try MR deletion.

@bboe
Member

bboe commented Nov 25, 2014

Great observation with regard to the dictionary. Why not add this FrozenDict class to praw.helpers? Make sure to add the link in the function's comments. Using that class should obviate the problem. Thanks!

http://stackoverflow.com/a/2704866/176978
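Adapted from that answer, a FrozenDict along these lines would make the subreddit entries hashable. Note this sketch uses `collections.abc.Mapping` for the read-only dict interface (on the Python versions current at the time, that was `collections.Mapping`).

```python
from collections.abc import Mapping  # `collections.Mapping` on older Pythons

class FrozenDict(Mapping):
    """Immutable, hashable dict wrapper, adapted from
    http://stackoverflow.com/a/2704866/176978."""

    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
        self._hash = None

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

    def __getitem__(self, key):
        return self._d[key]

    def __hash__(self):
        # Computed lazily; XOR over item hashes is order-independent.
        if self._hash is None:
            h = 0
            for pair in self._d.items():
                h ^= hash(pair)
            self._hash = h
        return self._hash

# A tuple of FrozenDicts is now a valid cache key:
key = (FrozenDict(name='botwatch'), FrozenDict(name='gold'))
cache = {key: 'cached response'}
```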

@voussoir
Contributor Author

I'm really lost now :(

Create MR API

r.create_multireddit()

subreddits is now a tuple full of FrozenDicts, so it's fully hashable and the cache is no longer complaining. That's good.
Reddit is now telling me 400 Bad Request, "cannot parse json data", despite my best attempts to match what the API page tells me and what Chrome appears to be doing when creating in-browser:

[screenshot]

I caught the error as a variable, and found that the request data didn't appear to carry the subreddit list anymore:

>>> a.request.body
'path=%2Fuser%2FGoldenSights%2Fm%2Ffruits&api_type=json&uh=•••••&visibility=private'

though perhaps it was removed for being empty; I'm not sure.

While taking shots in the dark, I also gave this a try because I don't know how closely it's supposed to resemble the API page:

subreddits = tuple(FrozenDict(name=six.text_type(sub)) for sub in subreddits)
path = "/user/%s/m/%s" % (self.user.name, name)
model = FrozenDict(subreddits=subreddits, visibility=visibility)
data = {'multipath': path, 'model': model}
return self.request_json(self.config['multireddit_about'] % (self.user.name, name), data=data)

I'm really out of ideas on what to try next.

I'm also having trouble deleting MRs but let's tackle one thing at a time I guess.

Edit: Realized that the FrozenDict was producing an incorrect hash. Give me a minute to investigate.

Edit: I fixed that part, but I'm still just stuck. The FrozenDict isn't behaving as expected with the other elements of PRAW. The keys and values aren't moving around correctly.

@bboe
Member

bboe commented Nov 26, 2014

I'll take a peek in a bit.

@bboe
Member

bboe commented Nov 26, 2014

I think this endpoint may take json data, which is going to be a bit of a pain to implement. Feel free to keep trying, but if my hunch is right then it's going to require some internal restructuring.

@voussoir
Contributor Author

Feels like Multireddits have come from a different planet or something.

I know you filed that issue last week (which hasn't seen any response yet), but I feel like the API itself is partly at fault. How silly that they want a list of single-key dicts in a format unlike anything else I can think of. I should ask them about it.

In case you were curious, my attempts at deletion resulted in 403 Forbidden, "Please login to do that" errors, though I hadn't modified the headers or cookies in any way and the request was carrying my modhash. It got pretty disorienting figuring out the way requests are bounced around through PRAW, but I ended up inside internal.py wondering why POST methods pass and DELETE methods fail. I wouldn't worry about solving this immediately, but if the issue is obvious it'd be nice to knock out.

@bboe
Member

bboe commented Nov 27, 2014

Okay, I got multireddit creation working. I did some refactoring. Please pull in the commit I added from this branch:

https://github.com/praw-dev/praw/tree/multireddit

You should be able to get delete's working from here.

Happy Thanksgiving (if you're in the US). Thanks again for all the great work.

@voussoir
Contributor Author

Wow, awesome! I wasn't expecting a turnaround that fast. I'll start cooking up the next PR.

Enjoy your Thanksgiving

Edit: I'm going to make subreddits default to an empty list.

Edit: Here are the features I will implement and test for the upcoming PR. Let me know if there's anything else you'd like to see

[ ] r.delete_multireddit(name)
[ ] Multireddit.delete()
[X] r.copy_multireddit(from, to)
[X] Multireddit.copy(to)
[X] r.rename_multireddit(from, to)
[X] Multireddit.rename(to)

The PR after that will consist of adding and dropping subreddits from multis. Then, the modification of Multireddit properties.

@voussoir
Contributor Author

Can I have your opinion on the copy function? I'm not sure how I want to lay out the parameters to make it quick and friendly, and give it a useful docstring

Here's what I have now

def copy_multireddit(self, *from_, to=None):

This gives the following:

  • from_ could be a single MR object, or two strings representing redditor and multi without any hassle.
  • Since to defaults to None, you can copy someone else's multireddit without to and it will immediately use the original name
  • If from_ only contains a single string, we assume that it is your own multireddit, and therefore a to must be specified because it will need a different name from the original.
  • If the user forgets to explicitly set to="new_name", and from_ contains 3 items (or an MR object plus one string), we can assume that the final item was supposed to be the to.

I know organizing the parameters like this is uncommon (and potentially against PEP 8?), but I think it gives more flexibility than the alternatives. Pretty much the only way to misuse this function is to copy your own multireddit without explicitly setting to, because the two strings will look like from_ data.

Does this system assume / imply too much?

Edit: It's working great so far. I'm quite happy with this method.

Edit: Copy and Rename are complete, now I'm left with this whiny Delete method. Do you have any idea why I might get 403 Forbidden, "Please login to do that" despite the request carrying my cookie and modhash?

@bboe
Member

bboe commented Nov 28, 2014

It should only be a function on existing Multis. I'm no longer in favor of adding more top-level functions.

@voussoir
Contributor Author

Oh, that's unexpected, though I guess it does simplify things. This also means that copying, renaming, and deleting a multireddit will all require a minimum of 2 calls when they could be done in 1.

Are you sure that's the right move? What are the concerns regarding top-level functions? I feel it will make writing more cumbersome without benefit. Since the functions are already available from the object side, removing the top-level side only gives users fewer options, which I don't think is helpful to anyone.

@bboe
Member

bboe commented Nov 28, 2014

It won't actually require two calls, because fetching a Multireddit is a lazy operation (or should be). Thus r.get_multireddit(...).delete() will only require a single HTTP request and is much cleaner (API-wise) than r.delete_multireddit(...).
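The lazy behavior described here can be sketched with `__getattr__`, which only fires for attributes that aren't already set. Everything below is a hypothetical illustration, not PRAW's actual RedditContentObject machinery.

```python
# Hypothetical sketch of lazy fetching: construction records the path
# only; __getattr__ fires on the first *missing* attribute access and
# performs the single HTTP request. Calling .delete() right away never
# triggers a fetch, so only one request is issued in total.

class LazyMultireddit(object):
    def __init__(self, reddit_session, path):
        self.reddit_session = reddit_session
        self.path = path
        self._fetched = False

    def __getattr__(self, attr):
        if not self._fetched:
            self.__dict__['_fetched'] = True
            info = self.reddit_session.request_json(self.path)
            self.__dict__.update(info)
            return getattr(self, attr)
        raise AttributeError(attr)

    def delete(self):
        # Found on the class, so __getattr__ never runs: no prior fetch.
        return self.reddit_session.request_json(self.path, delete=True)
```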

@voussoir
Contributor Author

In that case, it looks like Multireddits aren't responding to fetch=False correctly. Here's what it looked like when I added a print() statement to internal.py

error

Furthermore, r.get_multireddit().delete() is already available in the current system (or will be, once it stops 403'ing), and users who are more comfortable in that style may use it as much as they want. Removing the alternative only takes power away from others, myself very much included. Your method may require less string manipulation with the arguments, but the outgoing request is going to look exactly the same.

I still don't understand the concern / perceived benefits?

@bboe
Member

bboe commented Nov 28, 2014

First, get_multireddit should not actually fetch anything until an attribute that is not available is needed. That's consistent with the remainder of PRAW. We can fix that and test it later.

Whether or not to add more UnauthenticatedReddit methods (via mixin classes) could be a topic of debate. Should every API action have a corresponding top-level r.do_some_action(all, arguments, needed) counterpart, or should the actions be associated with their models in an object-oriented approach? I prefer the latter, as it provides organization while keeping the code and documentation simple.

For the most part, PRAW has so far supported both approaches. The reasoning for this is simply backwards compatibility with older versions of PRAW. If I had more time, I would remove all the unnecessary top-level functions in the PRAW3 release; however, I don't have that time. What I can do is prevent the addition of new top-level functions, and that's what I intend to do.

@voussoir
Contributor Author

As the Owner, I've got to respect your decision there, it wouldn't be right for me to force you into a debate about your own project.

However, for me to write this change would be to shoot myself in the foot. It may be worthwhile to get the opinions of other PRAW users and see whether they consider the r.catch_all() style to be PRAW's strong point or a hindrance.

@bboe
Member

bboe commented Nov 28, 2014

I don't understand how making the addition shoots yourself in the foot. You can still accomplish exactly what you want to do; the syntax is just slightly different.

@voussoir
Contributor Author

Users who prefer the r.get_multireddit().rename() form won't notice any change for better or worse, and users who prefer the toplevel will have something to groan about with nothing positive to show for it.

It's turning 1 click into 2. Like if sending a reddit PM permanently required you to go to their user page and click "send a message" instead of being able to type their name straight into the compose box. Or disabling the ability to use mkdir in the commandline because everyone should click the "new folder" button instead. It's a net-inconvenience and the end-result is exactly the same.

I've always considered the Redditor objects, etc., to be containers for information, and the ability to use the Redditor.send_message() function is icing on the cake. Making r.get_redditor('bboe').send_message('hello', 'hi') the required syntax is too much icing. The lines are unnecessarily lengthy, and it's clunkier to read and write than r.send_message('bboe', 'hello', 'hi'), especially when I'm using PRAW from the command line, which is all the time.

My opinions on this kind of UX are generally more extreme than the average person's, which is why it may help to hear from other PRAW users.

Sorry for being a pain in the ass.

@bboe
Member

bboe commented Nov 28, 2014

That's good feedback. I agree that r.get_redditor('bboe').send_message('hello', 'hi') is too long. If I were to break things in the future it would probably just be:

r.redditor('bboe').message('hello', 'hi')

I would argue that it's more common, however, to perform an action dynamically based on data returned from a PRAW request. Thus you'll often see:

sub.author.send_message('hello', 'hi')

which is slightly simpler than:

r.send_message(sub.author, 'hello', 'hi')

not to mention that the documentation could confuse people (and the code is more complicated) and they may think they need to call the function like:

r.send_message(sub.author.name, 'hello', 'hi')

Perhaps we can support a way to enable god mode (http://en.wikipedia.org/wiki/God_object) that will dynamically generate all the top-level functions for those who want them without cluttering the documentation with duplicate functionality.

Also, the example you provide is actually a great reason why I prefer there to be only one way to do something: the way we reuse the same documentation strings for these two methods has resulted in a number of issues brought up on /r/redditdev.
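For what it's worth, the god-mode idea could be prototyped with a little introspection. Everything below is a hypothetical sketch using stand-in classes, not real PRAW API: it generates a top-level wrapper for each public method of an object class, so r.send_message('bboe', ...) delegates to the object method.

```python
# Hypothetical sketch of "god mode": attach a top-level wrapper on
# `reddit` for each public method of `obj_class`. The first positional
# argument names the target, and `factory(reddit, name)` builds the
# object the call is delegated to.

def enable_god_mode(reddit, obj_class, factory):
    for attr in dir(obj_class):
        if attr.startswith('_') or not callable(getattr(obj_class, attr)):
            continue

        def wrapper(target, *args, _method=attr, **kwargs):
            obj = factory(reddit, target)
            return getattr(obj, _method)(*args, **kwargs)

        setattr(reddit, attr, wrapper)

# Demonstration with stand-in classes (not PRAW's real objects):
class Redditor(object):
    def __init__(self, reddit, name):
        self.name = name

    def send_message(self, subject, body):
        return (self.name, subject, body)

class Reddit(object):
    pass

r = Reddit()
enable_god_mode(r, Redditor, Redditor)
print(r.send_message('bboe', 'hello', 'hi'))  # ('bboe', 'hello', 'hi')
```

The wrappers live only on the instance, so the class documentation stays uncluttered while both calling styles keep working.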

@voussoir
Contributor Author

r.redditor('bboe').message('hello', 'hi')

This is sacrificing clarity and readability for fitting characters on a line, especially since Redditor and Message are both objects. "Should I be using r.redditor() or praw.objects.Redditor()?"

The alternative to the line-length issue is:

bboe = r.get_redditor('bboe')
bboe.send_message(...)

which, again... 1 click into 2 here.


It's more common to perform an action dynamically
...
The way we use the same docstrings has resulted in a number of issues

I'd agree, and that's why I consider Redditor.send_message() to be the icing that gives you a shortcut past other methods, as in your example. This needs to be advertised in the docs and, if you'd like to make this shift, could become the advertised method in the docs, but not at the expense of literally losing the other options.

Most people don't know the in-depth Windows hotkeys, but they're a godsend to those that do. They speed up the process if you are willing to learn how to use them, and don't cause any grief if you are fine with using a mouse. When learning how to use a computer, you'll be taught with a mouse, so PRAW can be taught with your OOP.


The idea of a god mode is right up my alley, but I think in the process of creating it, you'll end up with a worse mash of functions than already exists. The functions won't be able to change or simplify at all, because they'll need to support both string input and object input just as they currently do, except then they'll be off in a separate file as part of a class that isn't meant to be instantiated by hand, and you have to get into statics and stuff (I don't know a lot about how this would work, so I could be wrong).

Sometimes I'll use commandline PRAW to do things that I could normally do in the browser, because the API provides all sorts of wonderful shorthand ways of doing things. The script shouldn't have artificial roadblocks to slow me down. r is the godmode, because it's like having all of Reddit in one place.

I realize this all sounds very melodramatic but I do not wish to rewrite it.

@voussoir
Contributor Author

Alright, I'm out of clues; deletion is still returning 403 "Please login to do that" despite carrying my cookie and modhash. By the time the Request object leaves internal._prepare_request(), the DELETE requests and POST requests look the same as far as I can tell, but for some reason the DELETEs don't want to work. It's hard for me to see what's going on inside a POST because I can't catch and inspect them manually like I can with the 403. However, using print statements I have determined that the headers and modhash of create_multireddit and delete_multireddit calls are identical.

Here is prepare_request, and here is delete_multireddit

Here are some screenshots relating my tales of woe. Chrome and PRAW are both hitting the same URL with roughly the same amount of information.

[screenshots]


As you can see in copy_multireddit and rename_multireddit, some multi operations have a special endpoint despite what the API page suggests! I tried assuming the same for delete_multireddit, but this failed as well. Here's what that function looked like:

fkwargs = {'username': self.user.name, 'multi':name}
multipath = self.config.API_PATHS['multireddit'].format(**fkwargs)
multipath = '/' + multipath
data = {'multipath': multipath}
# multireddit_delete = "api/multi/delete/"
url = self.config['multireddit_delete']
print(url, data)
return self.request_json(url, data=data, delete=True)

This attempt goes against what I was seeing in Chrome, but the API page made me think I'm supposed to pass the multipath in data.

Interestingly, if you visit http://www.reddit.com/api/multi/delete, it actually has something there, unlike copy and rename. I'm not sure what to make of that.

What obvious hint am I missing here?


We can resume arguing about toplevels just as soon as this endpoint stops giving me a hard time.

@bboe bboe removed this from the praw3 milestone Apr 12, 2015
@bboe
Member

bboe commented Apr 24, 2015

Any status on this issue? I've been trying to clean some stuff up for a PRAW3 release and wanted to gauge how much you'd like to get this in.

@voussoir
Contributor Author

I was having trouble understanding the JSON format that reddit wanted, but I've just created several multireddits and I think I have a much better handle on it overall. I'll definitely be able to put together a PR with at least some basic MR features.

@bboe
Member

bboe commented Apr 25, 2015

Fantastic. Thanks for the efforts.

@bboe
Member

bboe commented May 26, 2015

I'm going to consider this resolved. A few more tweaks and tests, otherwise it should be good to go. Thanks!

@bboe bboe closed this as completed May 26, 2015