Not all text matched with link nested in spans #541

Open
eht16 opened this issue Apr 1, 2023 · 1 comment
Labels
future (things to consider later)


eht16 commented Apr 1, 2023

With some HTML like:

<span class="h-card">
 <a class="u-url mention" href="http://localhost/" rel="nofollow noopener noreferrer" target="_blank">
  @
  <span>
   matched text
  </span>
 </a>
</span>
Some text which is missing.

only the text of the a tag is matched, not the text beyond it (the HTML snippet corresponds to the content field returned by the Mastodon API, https://docs.joinmastodon.org/methods/statuses/#get).

I'm not sure whether this is desired behavior or a bug.
For comparison, I tried BeautifulSoup and html2text; both consider the additional text and return it.

>>> import bs4, html2text, requests_html
>>> html = "<span class=\"h-card\"><a class=\"u-url mention\" href=\"http://localhost/\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">@<span>matched text</span></a></span> Some text which is missing."
>>> print(bs4.BeautifulSoup(html, features="html.parser").get_text('\n'))
@
matched text
 Some text which is missing.
>>> print(html2text.html2text(html))
[@matched text](http://localhost/) Some text which is missing.


>>> print(requests_html.HTML(html=html).text)
@matched text
>>> 

P.S.: great to see this package is maintained again! ❤️

Contributor

surister commented Apr 2, 2023

It wouldn't be the first time I've seen this; for example, the Scala HTML/XML parser implementation used by Spark in Azure behaves the same way. I talked to Microsoft about it, and it is intended behavior.

Parsing HTML is hard, since pretty much any string is 'valid' HTML. Anyway, without going too deep into the pyquery/lxml implementation, I wouldn't be surprised if this was intended.
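A likely mechanism, for anyone digging into the pyquery/lxml side: lxml stores text that follows an element in that element's .tail attribute rather than inside it, so any extraction that only walks element text (and ignores tails) silently drops trailing text. A minimal sketch with plain lxml (not the actual requests_html code path, just an illustration of the tail behavior) on the snippet from this issue:

```python
from lxml import html as lxml_html

snippet = (
    '<span class="h-card">'
    '<a class="u-url mention" href="http://localhost/">@'
    '<span>matched text</span></a></span> Some text which is missing.'
)

# create_parent=True wraps the fragment (element + trailing text) in a <div>
root = lxml_html.fragment_fromstring(snippet, create_parent=True)

# text_content() walks .text and .tail of every descendant, so nothing is lost
print(root.text_content())    # -> @matched text Some text which is missing.

# The trailing text lives in the .tail of the outer <span>, not inside it
outer_span = root[0]
print(repr(outer_span.tail))  # -> ' Some text which is missing.'
```

This matches the observed difference: BeautifulSoup and Element.text_content() pick up the tail text, while an extraction that only concatenates element text does not.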

@surister surister added the future things to consider later label Apr 2, 2023