
Sunday, February 10, 2019

Using the asciimatics and pyfiglet python modules

This is a simple example of how to use the asciimatics and pyfiglet python modules with Python 3.6.4.
First you need to install them with the pip tool.
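If they are not installed yet, the pip commands would look something like this (run from the Scripts folder, like the other install examples on this blog):
C:\Python364\Scripts>pip install asciimatics
C:\Python364\Scripts>pip install pyfiglet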
The source code is simple and starts with the imports.
Fire, Print, and Screen are used to show the fire effect and to print text rendered with Figlet and FigletText.
Because the fire and the text run in a console application, I used the default Screen Buffer Size width of 80.
Screen.wrapper(my_work_web) shows all the effects from the my_work_web function.
Inside this function, two variables hold the font names: banner_font and web_font.
The main reason I defined web_font was to show my web page address, but its rendered size goes over the screen size.
I tested most of the fonts from the pyfiglet python module, but I could not find one small enough to show a web page link.
This is the source code I tested:
# -*- coding: utf-8 -*-
"""
@author: catafest
"""

from asciimatics.renderers import FigletText, Fire
from asciimatics.scene import Scene
from asciimatics.screen import Screen
from asciimatics.effects import Print
from asciimatics.exceptions import ResizeScreenError
from pyfiglet import Figlet
import sys

def my_work_web(screen):
    banner_font = "banner3"
    web_font = "block"
    scenes = []
    effects = [
        Print(screen,
              Fire(screen.height, 80, "*" * 70, 0.8, 60, screen.colours,
                   bg=screen.colours >= 256),
              0,
              speed=1,
              transparent=False),
        Print(screen,
              FigletText("Follow ", banner_font),
              (screen.height - 4) // 2,
              colour=Screen.COLOUR_BLUE,
              speed=1,
              stop_frame=30),
        Print(screen,
              FigletText("me", banner_font),
              (screen.height - 4) // 2,
              colour=Screen.COLOUR_BLUE,
              speed=1,
              start_frame=30,
              stop_frame=50),
        Print(screen,
              FigletText("on web", banner_font),
              (screen.height - 4) // 2,
              colour=Screen.COLOUR_BLUE,
              speed=1,
              start_frame=50,
              stop_frame=70),
        Print(screen,
              FigletText("catafest", banner_font),
              (screen.height - 4) // 2,
              colour=Screen.COLOUR_BLUE,
              speed=1,
              start_frame=70),
    ]
    scenes.append(Scene(effects, 100))

    text = Figlet(font=web_font, width=300).renderText("bye!")
    width = max([len(x) for x in text.split("\n")])

    effects = [
        Print(screen,
              Fire(screen.height, 80, "*" * 70, 0.8, 60, screen.colours),
              0,
              speed=1,
              transparent=False),

        Print(screen,
              FigletText("bye!", web_font),
              (screen.height - 2) // 2,
              colour=Screen.COLOUR_WHITE,
              bg=Screen.COLOUR_BLUE,
              speed=1)
    ]
    scenes.append(Scene(effects, -1))
    screen.play(scenes, stop_on_resize=True)


if __name__ == "__main__":
    while True:
        try:
            Screen.wrapper(my_work_web)
            sys.exit(0)
        except ResizeScreenError:
            pass
The result of this source code is this:

Monday, January 28, 2019

Testing the imageio python module.

This python module comes with this intro from the PyPI website:
Imageio is a Python library that provides an easy interface to read and write a wide range of image data, including animated images, volumetric data, and scientific formats. It is cross-platform, runs on Python 2.7 and 3.4+, and is easy to install.
Let's install this python module:
C:\>cd C:\Python364
C:\Python364>cd Scripts
C:\Python364\Scripts>pip3.6.exe install imageio
Collecting imageio
...
Successfully built imageio
Installing collected packages: imageio
Successfully installed imageio-2.4.1
You are using pip version 18.0, however version 18.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
I tested it with a simple read of medical data (DICOM - CT-MONO2-16-brain), see here.
The Digital Imaging and Communications in Medicine (DICOM) standard starts from the basic idea that patient and machine-readable information is embedded within a file (usually an image) as it is created or converted.
This is a simple example without security.
Without encrypted connections between applications, anyone on the network could intercept the DICOM files and extract the patient information.
The imageio documentation provides a range of usage examples:
  1. Read an image;
  2. Iterate over frames in a movie;
  3. Grab screenshot or image from the clipboard;
  4. Convert a movie;
  5. Writing videos with FFMPEG and vaapi;
All supported file formats (93 file types) can be read by this python module, see the webpage here.
The examples from the official webpage work well.
Only the example with the DICOM file could not be tested.
The main reason: I tried to find a DICOM file, but I did not find one.
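As a quick check of the installation, one of the sample images bundled with imageio can be read; this is a minimal sketch (the imageio: prefix for the sample files is my assumption based on the imageio documentation):
import imageio

# read one of the sample images that ship with imageio
im = imageio.imread('imageio:chelsea.png')
# the result is a numpy array with the image data
print(im.shape)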

Saturday, January 26, 2019

Testing the webpy python module.

Today I wrote about another python module named web.py.
The reason I started this tutorial comes from Google's SDK page for App Engine.
Google lists the following frameworks that can be used with the Python programming language:
  • Flask;
  • Django;
  • Pyramid;
  • Bottle;
  • web.py
  • Tornado
In the past I started to learn and use Django, I also tested Flask and Bottle, and today it is the turn of the web.py python module.
First, I can tell you this python module is a simple web framework that comes with the web.py slogan:
Think about the ideal way to write a web app. Write the code to make it happen.
C:\Python364\Scripts>pip install web.py==0.40-dev1
Collecting web.py==0.40-dev1
  Downloading https://files.pythonhosted.org/packages/db/a5/8dfacc190908f987663269a92efa682175c377e3f7eab84ed0a89c963b47/web.py-0.40.dev1.tar.gz (117kB)
    100% |████████████████████████████████| 122kB 936kB/s
Building wheels for collected packages: web.py
  Building wheel for web.py (setup.py) ... done
  Stored in directory: C:\Users\catafest\AppData\Local\pip\Cache\wheels\1b\15\12\4fd91f5ed7e3c8aae085050cce83f72b7ca4f463bf3e67d2b7
Successfully built web.py
Installing collected packages: web.py
Successfully installed web.py-0.40.dev1
Let's test the example from the official website:
C:\Python364>python.exe
Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import web
>>> urls = (
...     '/(.*)', 'hello'
... )
>>> app = web.application(urls, globals())
>>> class hello:
...     def GET(self, name):
...         if not name:
...             name = 'World'
...         return 'Hello, ' + name + '!'
...
>>> if __name__ == "__main__":
...     app.run()
...
http://0.0.0.0:8080/
127.0.0.1:50542 - - [27/Jan/2019 07:30:28] "HTTP/1.1 GET /" - 200 OK
127.0.0.1:50542 - - [27/Jan/2019 07:30:28] "HTTP/1.1 GET /favicon.ico" - 200 OK
The server listens on 0.0.0.0 (all network interfaces), and you can see the result at http://127.0.0.1:8080/.
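The same example can also be saved as a script and started from the command line; as far as I know, web.py takes an optional port from the first command-line argument, so something like python hello.py 8888 should use port 8888 (the file name hello.py is my choice here):
# hello.py - the same minimal web.py application as above
import web

urls = (
    '/(.*)', 'hello'   # map any path to the hello class
)
app = web.application(urls, globals())

class hello:
    def GET(self, name):
        if not name:
            name = 'World'
        return 'Hello, ' + name + '!'

if __name__ == "__main__":
    # start the built-in web server
    app.run()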

Tuesday, January 1, 2019

Detect nudity with nudepy python module.

Today I tested another python module named nudepy.
You can find it here.
This python module is a port of nude.js to Python.
Let's start the tutorial with the installation:
C:\Python364\Scripts>cd ..

C:\Python364>cd Scripts

C:\Python364\Scripts>pip install nudepy
Requirement already satisfied: nudepy in c:\python364\lib\site-packages (0.4)
Requirement already satisfied: pillow in c:\python364\lib\site-packages (from nudepy) (5.3.0)
To test this python module, I used four images, with the idea that at least one of them is a nude image.
The image below shows all the images used in the test.

These image files are named:
  • test_nude_001.jpg
  • test_nude_002.jpg
  • test_nude_003.jpg
  • test_nude_004.jpg
Let's see the script:
# select the jpeg test files from the current folder
import os, fnmatch
# import the nude python module
import nude
from nude import Nude
#
nude_jpegs = fnmatch.filter(os.listdir('.'), '*nude*.jpg')
print(nude_jpegs)
for found_file in nude_jpegs:
    print (found_file)
    print("Nude file: ",nude.is_nude(str(found_file)))
    n = Nude(str(found_file))
    n.parse()
    print("and test result: ", n.result, n.inspect())
    print("====================")
The output of this script is:
C:\Python364>python.exe test_nude.py
['test_nude_001.jpg', 'test_nude_002.jpg', 'test_nude_003.jpg', 'test_nude_004.jpg']
test_nude_001.jpg
Nude file:  False
and test result:  False #
====================
test_nude_002.jpg
Nude file:  False
and test result:  False #
====================
test_nude_003.jpg
Nude file:  False
and test result:  False #
====================
test_nude_004.jpg
Nude file:  True
and test result:  True #
====================

Thursday, December 27, 2018

Using LibROSA python module.

This python module, named LibROSA, is a python package for music and audio analysis that provides the building blocks necessary to create music information retrieval systems.
C:\Python364>cd Scripts
C:\Python364\Scripts>pip install librosa
Collecting librosa
...
Successfully installed audioread-2.1.6 joblib-0.13.0 librosa-0.6.2 llvmlite-0.26.0 numba-0.41.0 resampy-0.2.1 scikit-learn-0.20.2
Let's create a waveform and a spectrogram with this python module.
For sound, the term waveform describes a depiction of the pattern of sound pressure variation (or amplitude) in the time domain.
A spectrogram (also known as a sonograph, voiceprint, or voicegram) is a visual representation of the spectrum of frequencies of sound or other signals as they vary with time.
I used a free WAV file sound from here.
The resulting waveform and spectrogram for that audio file are shown in the next screenshots:


My example shows the waveform first, and you need to close it to see the spectrogram.
Let's see the source code of this example:
import librosa
import librosa.display
import matplotlib.pyplot as plt

# set the size of the plot
plt.figure(figsize=(14, 5))
# load the audio file: out holds the samples, samples holds the sample rate
path = "merry_christmas.wav"
out, samples = librosa.load(path)
print(out.shape, samples)
# draw the waveform
librosa.display.waveplot(out, sr=samples)
plt.show()
# compute the short-time Fourier transform and convert the amplitudes to decibels
stft_array = librosa.stft(out)
stft_array_db = librosa.amplitude_to_db(abs(stft_array))
# draw the spectrogram
librosa.display.specshow(stft_array_db, sr=samples, x_axis='time', y_axis='hz')
plt.colorbar()
plt.show()
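The same module can also draw a mel-scaled spectrogram of this WAV file; this is a minimal sketch using librosa.feature.melspectrogram and librosa.power_to_db, not part of the example above:
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

path = "merry_christmas.wav"
# load the samples and the sample rate
out, samples = librosa.load(path)
# compute the mel-scaled spectrogram and convert the power values to decibels
mel = librosa.feature.melspectrogram(y=out, sr=samples)
mel_db = librosa.power_to_db(mel, ref=np.max)
# draw it with a mel-scaled frequency axis
librosa.display.specshow(mel_db, sr=samples, x_axis='time', y_axis='mel')
plt.colorbar(format='%+2.0f dB')
plt.show()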