[Plugin Update] WolframAlpha

I have updated the plugin; the changes are as follows:

  • changed the data return format to JSON.
  • made it possible to have much finer control over results.
  • there are now three modes of search - text, image & sound. E.g. text will provide answers as text; image will provide answers with images of the object, graphs, maps, simple snippets of dictionary-type extracts and so on; sound will provide URLs to listen to the requested sound. With the new changes you can not only select the returned pods, but also select anything within a pod, or even let your app know the number of results or assumptions made.
  • added a quick question mode - the answer will be a to-the-point reply with no extras.
  • added a quick question mode formatted for SPOKEN REPLIES! - the returned answer will be formatted as if it were being spoken to you.
  • added an element! - a text reader. The element will allow typed text to be read aloud at a button press… BUT its real power comes when you drop the width and height to 0px and use the new workflow action inside “element actions” to have it read any text… say, maybe, the result of a quick question formatted as a SPOKEN REPLY…
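For anyone curious how the modes above line up with Wolfram|Alpha's own interfaces, here is a rough sketch. The endpoint paths and query parameters (`output=json` on the Full Results API, and the `/v1/result` and `/v1/spoken` endpoints) are Wolfram|Alpha's published APIs; the helper names and the `"DEMO"` app ID are just placeholders of mine, not the plugin's actual code.

```typescript
const BASE = "https://api.wolframalpha.com";

// Full Results API with JSON output - the "output=json" switch is the
// JSON format change described above (the default is XML).
function fullResultsUrl(appId: string, query: string): string {
  const params = new URLSearchParams({ appid: appId, input: query, output: "json" });
  return `${BASE}/v2/query?${params}`;
}

// Short Answers API - a single, to-the-point plain-text reply,
// matching the "quick question" mode.
function quickQuestionUrl(appId: string, query: string): string {
  const params = new URLSearchParams({ appid: appId, i: query });
  return `${BASE}/v1/result?${params}`;
}

// Spoken Results API - the answer phrased for reading aloud,
// matching the "SPOKEN REPLIES" mode.
function spokenReplyUrl(appId: string, query: string): string {
  const params = new URLSearchParams({ appid: appId, i: query });
  return `${BASE}/v1/spoken?${params}`;
}
```

The spoken-reply URL is the natural thing to feed into the text-reader element described in the last bullet.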

I will be making the speech element editable shortly, and I will also be adding a new plugin devoted to text to speech.

This update relates to this post - [New Plugin] WolframAlpha



For some reason the Text to Speech part of your plugin doesn’t work in mobile apps. Is this a bug, or can WolframAlpha not actually support mobile apps?


Wolfram have an option to have their info returned formatted as if it were spoken, so I added a simple voice-synthesis element as an out-of-the-box way to speak the result. Because it uses the browser’s Speech Synthesis API, it will not work in some browsers; it’s not a mobile issue, it’s browser-specific. If you need an all-round option, you may want to look at an API that takes text and returns an audio stream/file, so you’re not relying on a specific browser feature.
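To make the browser dependency concrete: the element relies on the Web Speech API’s `speechSynthesis` object, which some browsers and embedded webviews simply don’t expose. A hypothetical helper (this is a sketch of the general technique, not the plugin’s actual code) can detect that and fail gracefully rather than throwing:

```typescript
// Attempt to speak `text` via the Web Speech API. Returns false when the
// environment does not expose speechSynthesis / SpeechSynthesisUtterance,
// which is exactly the situation some mobile webviews put you in.
// `g` defaults to globalThis so the check can be exercised with a mock.
function trySpeak(text: string, g: any = globalThis): boolean {
  const synth = g.speechSynthesis;
  const Utterance = g.SpeechSynthesisUtterance;
  if (!synth || !Utterance) return false; // API missing in this browser/webview
  synth.speak(new Utterance(text));
  return true;
}
```

When `trySpeak` returns false, that’s the point where a server-side text-to-audio API (text in, audio file/stream out) becomes the more reliable fallback.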

ok, thanks for the info