It depends on the data you're after. If it's data the websites you're targeting readily share, then it will likely be accessible through some kind of data feed they provide. If it's not readily available, you may need to 'scrape' it, as Gary mentions.
Once you have the data, depending on how you got it, you may need to 'map' it into your database. If you scraped it, you (or whoever did the scraping) may have been able to pull each data set from each site in the same way, so it will already be arranged consistently in a table. If you got it from a feed (a .csv or .xml file, for example), you may need to write a script to 'map' each data column onto the matching column in your database.
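As a rough illustration of that 'mapping' step (a minimal sketch in Python rather than PHP, with a made-up feed and table schema): read the feed rows by header name so the feed's column order no longer has to match your table's column order.

```python
import csv
import io
import sqlite3

# Hypothetical CSV feed: note its column order differs from our table's.
feed_csv = "price,title,stock\n9.99,Widget,12\n4.50,Gadget,3\n"

# Assumed target table, with its own column arrangement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (title TEXT, stock INTEGER, price REAL)")

# DictReader keys each row by header name, so the feed's ordering
# no longer matters - we map columns explicitly in the INSERT.
reader = csv.DictReader(io.StringIO(feed_csv))
for row in reader:
    conn.execute(
        "INSERT INTO products (title, stock, price) VALUES (?, ?, ?)",
        (row["title"], int(row["stock"]), float(row["price"])),
    )
conn.commit()

rows = conn.execute(
    "SELECT title, stock, price FROM products ORDER BY title"
).fetchall()
print(rows)  # [('Gadget', 3, 4.5), ('Widget', 12, 9.99)]
```

The same idea applies whatever the language: one small script per feed that translates that feed's layout into your one canonical schema.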
Specific methods depend on what type of data you're getting, what kind of database you want to use, and what you want to do with the data. In my experience I have used PHP to fetch XML, CSV, and TXT feeds and aggregate them into MySQL databases, but I haven't done much 'scraping'.
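To show what 'aggregating feeds into a database' can look like (again a Python sketch with invented feed data, standing in for the PHP/MySQL setup described above): parse the feed, insert the rows, then let the database combine duplicates.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical XML feed from one of several source sites.
feed_xml = """
<items>
  <item><name>Widget</name><qty>5</qty></item>
  <item><name>Widget</name><qty>7</qty></item>
  <item><name>Gadget</name><qty>2</qty></item>
</items>
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (name TEXT, qty INTEGER)")

# Load each <item> element into the table.
for item in ET.fromstring(feed_xml).iter("item"):
    conn.execute(
        "INSERT INTO inventory (name, qty) VALUES (?, ?)",
        (item.findtext("name"), int(item.findtext("qty"))),
    )
conn.commit()

# Aggregate repeated entries across feeds with a simple GROUP BY.
totals = dict(conn.execute("SELECT name, SUM(qty) FROM inventory GROUP BY name"))
print(totals)  # {'Gadget': 2, 'Widget': 12}
```

In a real setup you would fetch the feed over HTTP and point the connection at MySQL instead of an in-memory database, but the parse-insert-aggregate shape is the same.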
Have a look at scraperwiki.com - it's like a Mechanical Turk for scraping websites.