
Monday, April 29, 2019

Python 3.7.3 : Get location of International Space Station.

Today I tested the urllib and json Python modules with Python 3.7.3.
The task was to get the location of the International Space Station using the Open Notify API.
The International Space Station is moving at close to 28,000 km/h, so its location changes really fast! Where is it right now?
Open Notify is an open source project that provides a simple programming interface for some of NASA's awesome data, taking the raw data and turning it into APIs related to space and spacecraft.
C:\Python373>python.exe
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib.request
>>> with urllib.request.urlopen('http://api.open-notify.org/iss-now.json') as f:
...     print(f.read(300))
...
b'{"iss_position": {"longitude": "-86.9247", "latitude": "-38.3744"}, "message": "success", "timestamp": 1556575039}'
>>> import json
>>> with urllib.request.urlopen('http://api.open-notify.org/iss-now.json') as f:
...     source = f.read()
...     data = json.loads(source)
...
>>> print(data)
{'iss_position': {'longitude': '151.1941', 'latitude': '49.4702'}, 'message': 'success', 'timestamp': 1556578621}
>>> print(data['iss_position']['longitude'])
151.1941
>>> print(data['iss_position']['latitude'])
49.4702
>>> print(data['message'])
success
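The same steps can be collected into a small standalone script. This is only a minimal sketch, assuming the same Open Notify endpoint; the iss_position helper name is mine, for illustration:

import json
import urllib.request

# Minimal sketch: read the current ISS position from the Open Notify API.
# The helper name iss_position is just for illustration.
def iss_position(url='http://api.open-notify.org/iss-now.json'):
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read())
    position = data['iss_position']
    return float(position['latitude']), float(position['longitude'])

if __name__ == '__main__':
    lat, lon = iss_position()
    print('The ISS is now at latitude {:.4f} and longitude {:.4f}'.format(lat, lon))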

Sunday, December 16, 2018

Fix errors when writing files.

Python is a very versatile programming language.
The tutorial for today is about:
  • checking the type of a variable;
  • understanding the errors writelines raises for a list of strings;
  • fixing those errors for writelines.
A good example of these errors is this:
>>> file.writelines(paragraphs)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: a bytes-like object is required, not 'str'
>>> file.writelines(paragraphs.decode('utf-8'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute 'decode'
This is a common issue when writing a list to a file.
The paragraphs variable is a list, see:
>>> type(paragraphs)
<class 'list'>
The list can be written into the file with writelines like this:
>>> file = open("out.txt","wb")
>>> file.writelines([word.encode('utf-8') for word in paragraphs])
Here file is an open file object and paragraphs is a list of strings.
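Putting it all together, here is a minimal sketch of the fix; the content of paragraphs is just example data:

# Minimal sketch of the fix; the content of paragraphs is just example data.
paragraphs = ['first paragraph\n', 'second paragraph\n']

# In binary mode ('wb') writelines() needs bytes, so encode each str element.
with open("out.txt", "wb") as file:
    file.writelines([word.encode('utf-8') for word in paragraphs])

# Alternatively, open the file in text mode and write the strings directly.
with open("out.txt", "w", encoding='utf-8') as file:
    file.writelines(paragraphs)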

Wednesday, February 23, 2011

Just a simple Python weather script.

Sometimes we need simple solutions. One example is displaying data on the desktop using conky under Linux.
Another example is displaying data without using the browser.
Whether you use Windows or Linux, Python scripts come to help. Here's a simple example written in Python that displays weather data.

import urllib
from xml.dom import minidom

# Yahoo! Weather feed URL and XML namespace
wurl = 'http://xml.weather.yahoo.com/forecastrss?p=%s'
wser = 'http://xml.weather.yahoo.com/ns/rss/1.0'

def weather_for_zip(zip_code):
    # ask for metric units (Celsius) with &u=c
    url = wurl % zip_code + '&u=c'
    dom = minidom.parse(urllib.urlopen(url))
    forecasts = []
    for node in dom.getElementsByTagNameNS(wser, 'forecast'):
        forecasts.append({
            'date': node.getAttribute('date'),
            'low': node.getAttribute('low'),
            'high': node.getAttribute('high'),
            'condition': node.getAttribute('text')
        })
    ycondition = dom.getElementsByTagNameNS(wser, 'condition')[0]
    return {
        'current_condition': ycondition.getAttribute('text'),
        'current_temp': ycondition.getAttribute('temp'),
        'forecasts': forecasts,
        'title': dom.getElementsByTagName('title')[0].firstChild.data
    }
def main():
    a=weather_for_zip("ROXX0003")
    print '=================================='
    print '|',a['title'],'|'
    print '=================================='
    print '|current condition=',a['current_condition']
    print '|current temp     =',a['current_temp']
    print '=================================='
    print '|  today     =',a['forecasts'][0]['date']
    print '|  high      =',a['forecasts'][0]['high']
    print '|  low       =',a['forecasts'][0]['low']
    print '|  condition =',a['forecasts'][0]['condition']
    print '=================================='
    print '|  tomorrow  =',a['forecasts'][1]['date']
    print '|  high      =',a['forecasts'][1]['high']
    print '|  low       =',a['forecasts'][1]['low']
    print '|  condition =',a['forecasts'][1]['condition']
    print '=================================='

main()
Here is the result of script execution:

>>> 
==================================
| Yahoo! Weather - Bucharest, RO |
==================================
|current condition= Light Snow
|current temp     = -3
==================================
|  today     = 23 Feb 2011
|  high      = 0
|  low       = -5
|  condition = Light Snow
==================================
|  tomorrow  = 24 Feb 2011
|  high      = 0
|  low       = -4
|  condition = Mostly Cloudy
==================================
>>> 
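The script above uses Python 2 syntax. With Python 3, urllib.urlopen moved to urllib.request.urlopen and print became a function, so the fetch part would look roughly like this (just a sketch; the Yahoo! Weather feed may no longer be available):

import urllib.request
from xml.dom import minidom

# Sketch only: the Yahoo! Weather feed used by the original script may no longer
# be online; this just shows how the same fetch looks with Python 3's urllib.
wurl = 'http://xml.weather.yahoo.com/forecastrss?p=%s'
wser = 'http://xml.weather.yahoo.com/ns/rss/1.0'

def current_condition(zip_code):
    url = wurl % zip_code + '&u=c'
    # urllib.urlopen from Python 2 became urllib.request.urlopen in Python 3
    dom = minidom.parse(urllib.request.urlopen(url))
    ycondition = dom.getElementsByTagNameNS(wser, 'condition')[0]
    return ycondition.getAttribute('text'), ycondition.getAttribute('temp')

print(current_condition("ROXX0003"))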

Thursday, February 4, 2010

Parsing feeds - part 1

From time to time I use conky. It is good for me because I have everything I need on my desktop.
How did Python help me in this case?
For example, I use one script to parse the feed from this URL:
"http://www.bnro.ro/nbrfxrates.xml"
The example is simple to understand:
from xml.dom import minidom as dom
import urllib

def fetchPage(url):
    # read the whole page as a single string
    a = urllib.urlopen(url)
    return ''.join(a.readlines())

def extract(webpage):
    a = dom.parseString(webpage)
    item2 = a.getElementsByTagName('SendingDate')[0].firstChild.wholeText
    print "DATA ", item2
    item = a.getElementsByTagName('Cube')
    for i in item:
        if i.hasChildNodes():
            # the EUR and USD rates sit at fixed positions in the Rate list
            eur = i.getElementsByTagName('Rate')[10].firstChild.wholeText
            dol = i.getElementsByTagName('Rate')[26].firstChild.wholeText
            print "EURO  ", eur
            print "DOLAR ", dol

if __name__ == '__main__':
    webpage = fetchPage("http://www.bnro.ro/nbrfxrates.xml")
    extract(webpage)
The result is:
$python xmlparse.py
DATA  2010-02-04
EURO   4.1214
DOLAR  2.9749
With "urllib" package I read the url.
The result is parsing with functions from "dom" package.
I used this functions "parseString" and "getElementsByTagName".
More about this functions you will see on:
http://docs.python.org/library/xml.dom.minidom.html
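For newer Python versions the same parsing works with urllib.request instead; this is only a rough sketch, assuming the feed layout (Rate positions 10 and 26) is unchanged:

from xml.dom import minidom as dom
import urllib.request

# Sketch only: the same parsing as above, using Python 3's urllib.request.
# It assumes the BNR feed still keeps the EUR and USD rates at positions 10 and 26.
def fetch_page(url):
    return urllib.request.urlopen(url).read()

def extract(webpage):
    a = dom.parseString(webpage)
    print("DATA ", a.getElementsByTagName('SendingDate')[0].firstChild.wholeText)
    for i in a.getElementsByTagName('Cube'):
        if i.hasChildNodes():
            rates = i.getElementsByTagName('Rate')
            print("EURO  ", rates[10].firstChild.wholeText)
            print("DOLAR ", rates[26].firstChild.wholeText)

if __name__ == '__main__':
    extract(fetch_page("http://www.bnro.ro/nbrfxrates.xml"))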
This is all.