Is it possible to detect embedded text or link-type objects in an image or portion of text when the user taps that object? There are some external types of media we want to open, such as a UIWebView to run HTML5 resources, a .doc/.xls reader, or sound and video.
Is there a way to open 3D objects, for example with an external 3D player?
When the user interacts with the document using “freehand” or “sticky notes”, what is the best way to store the information generated by that interaction? We want it to be available on multiple devices for the same user.
- Yes, this is certainly possible. Links are a type of annotation, and typically have an “action” associated with them; each action has a type that tells you how to handle it.
When a user taps on a link, you can check its action type and continue as appropriate for that particular action. For example, in libTools.a, we change our behaviour between e_GoTo, an internal document link that causes a scroll to occur (as when you tap on a table of contents entry), and e_URI, which causes the app to open the URI in Safari. With the source code to libTools.a, it would just be a matter of checking for other action types and responding appropriately, such as overlaying a UIWebView, playing a sound, etc.
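The dispatch described above might look roughly like the following sketch. The names `PTAnnot`, `PTLink`, `e_ptLink`, `e_ptGoTo`, and `e_ptURI` are assumed PDFNet iOS spellings, and `scrollToDestination(_:)` and `uriString(from:)` are hypothetical helpers; check the headers in your SDK version for the exact API.

```swift
// Hedged sketch: dispatch on a tapped link's action type.
// PDFNet class/enum names are assumptions; verify against your SDK headers.
func handleTap(on annot: PTAnnot) {
    guard annot.getType() == e_ptLink else { return }

    let link = PTLink(ann: annot)
    guard let action = link?.getAction(), action.isValid() else { return }

    switch action.getType() {
    case e_ptGoTo:
        // Internal link: scroll to the destination page,
        // e.g. a table-of-contents entry.
        scrollToDestination(action.getDest())   // hypothetical helper
    case e_ptURI:
        // External link: hand the URI to the system,
        // or load it in your own UIWebView instead.
        if let uri = uriString(from: action),   // hypothetical helper
           let url = URL(string: uri) {
            UIApplication.shared.open(url)
        }
    default:
        // Other action types (sound, movie, JavaScript, ...)
        // can be handled here as needed.
        break
    }
}
```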
In this case you are only limited by the capabilities of iOS. We provide access to the 3D data, and you can then display it within the app (using custom code or a third-party library), or send it to another app.
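For the “send it to another app” route, the standard UIKit mechanism is `UIDocumentInteractionController`. A minimal sketch, assuming you have already extracted the 3D asset from the PDF and written it to `fileURL`:

```swift
import UIKit

final class ExternalViewerLauncher {
    // The controller must be retained while its menu is on screen,
    // so keep a strong reference to it.
    private var interaction: UIDocumentInteractionController?

    // Presents the system "Open in..." sheet listing installed apps
    // registered for the file's type (e.g. a 3D viewer).
    func openInExternalViewer(fileURL: URL, from viewController: UIViewController) {
        let controller = UIDocumentInteractionController(url: fileURL)
        interaction = controller
        controller.presentOpenInMenu(from: viewController.view.bounds,
                                     in: viewController.view,
                                     animated: true)
    }
}
```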
When a user draws or adds a sticky note, you can save the PDF document and develop a way of syncing it between devices. If you want to reduce the amount of data you transfer, you could instead serialize only the annotations and send that information to the other devices, which would use it to update their copies of the PDF.
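One common serialization format for this is XFDF, an XML representation of just the annotation data, which is typically far smaller than the full PDF. The sketch below assumes PDFNet-style FDF support; `fdfExtract`, `fdfUpdate`, `saveAsXFDFToString`, and `PTFDFDoc.createFrom(xfdfString:)` are assumed names that should be verified against your SDK version.

```swift
// Hedged sketch: sync only annotation data between devices via XFDF.
// All PDFNet method names here are assumptions; check your SDK headers.

// On the device where the user annotated the document:
func exportAnnotations(from doc: PTPDFDoc) -> String? {
    // Extract annotation data into an FDF document.
    guard let fdf = doc.fdfExtract(e_ptboth) else { return nil }
    // Serialize it as an XFDF string, suitable for sending to a server.
    return fdf.saveAsXFDFToString()
}

// On the receiving device, after downloading the XFDF string:
func importAnnotations(into doc: PTPDFDoc, xfdf: String) {
    // Rebuild the FDF document and merge the annotations
    // into the local copy of the PDF.
    if let fdf = PTFDFDoc.createFrom(xfdfString: xfdf) {
        doc.fdfUpdate(fdf)
    }
}
```

The trade-off is that both devices must already hold the same base PDF; only the annotation layer travels over the network.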