When crawling the web, Google reads and abides by the robots.txt file.
This robots.txt file may optionally contain a line:
Sitemap: http://yourwebsite.com/sitemap.xml (or a .gz extension ...)
This tells Google about the sitemap, so that, after reading robots.txt, Google will also read sitemap.xml, and then Google will (try to) index the pages listed in sitemap.xml.
When I access the Q2A page <myWebsite>/sitemap.xml, I trigger the Q2A "sitemap plugin", which currently replies to my browser with some XML (the sitemap properly formatted as XML). I can read this XML in my browser, but I then have to copy/paste it to create a sitemap.xml file that I will transfer to the root folder of my website.
Could the plugin save the generated sitemap XML data to a file sitemap.xml and place it in the web root directory (instead of, or in addition to, serving it)?
Also, instead of having to open my browser and request the URL <myWebsite>/sitemap.xml, it would be nice to have a script that does the same thing (generate the file and place it in the web root directory). I could then trigger this script with a cron job -- my site is hosted by ovh.com, and a cron service comes with my hosting plan (the cheapest one).
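As a workaround in the meantime, something like the following could be run from cron: a small Python sketch that fetches the XML the plugin already serves and writes it to the web root. The URL and the destination path below are placeholders, not the real values for any particular hosting setup.

```python
# Sketch of a cron script: fetch the sitemap that the Q2A sitemap plugin
# generates on the fly, and save it as a static sitemap.xml in the web root.
# The URL and destination path are assumptions -- adjust for your own site.
import urllib.request


def fetch_and_save(sitemap_url, dest_path):
    """Download the generated sitemap and write it to dest_path.

    Returns the number of bytes written.
    """
    with urllib.request.urlopen(sitemap_url) as response:
        data = response.read()
    with open(dest_path, "wb") as f:
        f.write(data)
    return len(data)


if __name__ == "__main__":
    # Hypothetical values: replace with your site URL and your web root path.
    fetch_and_save("http://yourwebsite.com/sitemap.xml",
                   "/home/youruser/www/sitemap.xml")
```

A crontab entry along these lines (paths are again placeholders) would then regenerate the file nightly, e.g. `0 3 * * * /usr/bin/python3 /home/youruser/fetch_sitemap.py`.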