Unable to extract Key Phrases from PDF Containing Plain Text #69

Closed
Ahmed3435 opened this issue Jul 3, 2019 · 1 comment

@Ahmed3435

Hi Team,
We have tried using the code, and it works fine when we index a PDF that contains only images. However, when we index a PDF that contains both images and text, only the images are extracted and not the text. Our requirement is to search for key phrases in PDFs that contain both text and images.
Please help.

Regards,
Ahmed

@Careyjmac
Collaborator

What you are seeing is expected with the JFK demo. It was built to work with scanned PDFs, which are all images with no native text (since that is what all the files in the JFK released document set are). The native text in your PDF is still being indexed and is therefore searchable; the reason you don't see it in the UI is that we depend on being able to OCR the images in order to extract the layout information of the text needed for the hOCR experience the UI provides.

It is still possible to use Cognitive Search, and possibly even the JFK demo, with your documents. Two options immediately come to mind:

  1. If you really care about the hOCR experience, you can make a small change to your indexer definition before running the JFKInitializer so that each PDF page is rendered to a single image instead of having its embedded images extracted individually. The change is to set the imageAction parameter to generateNormalizedImagePerPage instead of generateNormalizedImages (see the sketch after this list). That lets us OCR the per-page image and therefore get the layout information we need for the hOCR experience. Note that this only works for PDFs; other document types will continue to have their images extracted as they are today. The other caveat is that the OCR model isn't perfect, so with this approach the displayed text will likely be less accurate than using the actual native text. The native text is still indexed and will be part of your searches, but it won't appear in the JFK UI. Costs may also be slightly higher, since we may end up generating and OCRing more images than with the original approach.
  2. If the hOCR experience isn't super important to you and you just want to explore the document set, take a look at the Azure Search Knowledge Mining Accelerator repo. It has a simpler template that does pretty much everything the JFK sample does, just without the hOCR, and it displays the native text as part of the experience.
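
For reference, here is a minimal sketch of what the change from option 1 looks like in an Azure Cognitive Search indexer definition. Only the imageAction value comes from this thread; the indexer, data source, skillset, and index names below are placeholders, and your actual definition in the JFK sample will have additional fields.

```json
{
  "name": "jfk-indexer",
  "dataSourceName": "jfk-datasource",
  "targetIndexName": "jfk-index",
  "skillsetName": "jfk-skillset",
  "parameters": {
    "configuration": {
      "dataToExtract": "contentAndMetadata",
      "imageAction": "generateNormalizedImagePerPage"
    }
  }
}
```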

Hope that helps!
