
Monday, March 10, 2025

News : Ollama-Adaptive-Image-Code-Gen project test - need more memory.

Python code that leverages a language model (such as LLaMA) to generate images featuring basic shapes in 2D or 3D. The script randomly selects shapes, colors, and areas to create diverse visuals. It continuously generates images based on AI-generated code, validates them, and provides feedback for iterative improvements.
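Before looking at the project itself, here is a rough idea of how such a generate-validate-feedback loop can be written. This is only a minimal sketch using the ollama Python package, with made-up prompts and helper names; it is not the project's actual code:

import ollama  # local Ollama client, installed from requirements.txt

MODEL = "llama3.1"  # assumed model; the project pulls its own ModelFiles

def ask(prompt: str) -> str:
    # Send a single prompt to the local model and return the reply text.
    response = ollama.chat(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]

dimension = ask("Choose the dimension of the shape: '2D' or '3D'. NOTE: Return only the chosen dimension.")
code = ask(f"Write Python code that draws a random {dimension} shape and saves it as shape.png.")

try:
    # Validate the AI-generated code by running it (a real loop would first strip markdown fences).
    exec(code)
    print("Image generated.")
except Exception as err:
    # Feed the error back to the model so it can improve the code on the next pass.
    code = ask(f"The previous code failed with: {err}. Fix it and return only the corrected code:\n{code}")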
The source code can be found in the GitHub project named Ollama-Adaptive-Image-Code-Gen.
You can download it and use the commands below to run and test this image generation feature with Ollama.
NOTE: The compose up process might take 20 - 25 minutes the first time, because it downloads all the respective ModelFiles.
Let's start:
git clone https://github.com/jaypatel15406/Ollama-Adaptive-Image-Code-Gen.git
Cloning into 'Ollama-Adaptive-Image-Code-Gen'...
Resolving deltas: 100% (30/30), done.

cd Ollama-Adaptive-Image-Code-Gen

Ollama-Adaptive-Image-Code-Gen>pip3 install -r requirements.txt
Collecting ollama (from -r requirements.txt (line 1))
...
Installing collected packages: propcache, multidict, frozenlist, aiohappyeyeballs, yarl, aiosignal, ollama, 
aiohttp
Successfully installed aiohappyeyeballs-2.4.8 aiohttp-3.11.13 aiosignal-1.3.2 frozenlist-1.5.0 multidict-6.1.0
ollama-0.4.7 propcache-0.3.0 yarl-1.18.3

Ollama-Adaptive-Image-Code-Gen>python main.py
 utility : pull_model_instance : Instansiating 'llama3.1' ...
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling manifest
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling 667b0c1932bc
 
Seems to work, but I need more memory:
Ollama-Adaptive-Image-Code-Gen>python main.py
 utility : pull_model_instance : Instansiating 'llama3.1' ...
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling manifest
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling 667b0c1932bc
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling 948af2743fc7
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling 0ba8f0e314b4
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling 56bb8bd477a5
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : pulling 455f34728c9b
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : verifying sha256 digest
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : writing manifest
 utility : pull_model_instance : 'llama3.1' Model Fetching Status : success

=========================================================================================

 utility : get_prompt_response : Prompt : Choose the dimension of the shape: '2D' or '3D'. NOTE: Return only the chosen dimension.
ERROR:root: utility : get_prompt_response : Error : model requires more system memory (5.5 GiB) than is available (5.1 GiB) (status code: 500) ... 
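One way around this memory limit, if adding RAM is not an option, is to test with a smaller model. Here is a minimal sketch with the ollama package, assuming a smaller model such as llama3.2:1b is good enough for the test; the project itself would still need to be pointed at that model in its own configuration, which I have not tried:

import ollama

# Assumption: a ~1B-parameter model needs far less system memory than llama3.1.
ollama.pull("llama3.2:1b")
reply = ollama.chat(
    model="llama3.2:1b",
    messages=[{"role": "user", "content": "Choose the dimension of the shape: '2D' or '3D'."}],
)
print(reply["message"]["content"])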

Saturday, March 1, 2025

Python 3.13.0rc1 : testing the elevenlabs package with artificial intelligence.

Today I tested the elevenlabs Python package to use it with artificial intelligence to create sound.
I installed this Python package with the pip tool, created a Python script file, and the basic script ran well with the API key from the official website.
pip install elevenlabs
Collecting elevenlabs
...
Installing collected packages: websockets, sniffio, pydantic-core, h11, annotated-types, pydantic, httpcore, anyio, httpx,
elevenlabs
Successfully installed annotated-types-0.7.0 anyio-4.8.0 elevenlabs-1.52.0 h11-0.14.0 httpcore-1.0.7 httpx-0.28.1 
pydantic-2.10.6 pydantic-core-2.27.2 sniffio-1.3.1 websockets-15.0
...
pip install playsound
Collecting playsound
...
Installing collected packages: playsound
Successfully installed playsound-1.3.0
...
python elevenlabs_test_001.py
The audio file was saved to generated_audio.mp3
This is the source code:
import io  # Import the io library
from elevenlabs import ElevenLabs
from playsound import playsound
import tempfile
import os

# API key for ElevenLabs
api_key = "API_KEY"
voice_id = "JBFqnCBsd6RMkjVDRZzb"


# Configure the ElevenLabs client
client = ElevenLabs(api_key=api_key)

# The text you want to convert to audio
text = 'Hello! This is a test without mpv.'

# Generate the audio
audio_generator = client.generate(text=text, voice=voice_id)

# Collect the data from the generator into a BytesIO object
audio_data = io.BytesIO()
for chunk in audio_generator:
    audio_data.write(chunk)
audio_data.seek(0)  # Reset the pointer to the beginning of the stream

# The path where the generated audio file will be saved
save_path = 'generated_audio.mp3'

# Save the audio to a temporary file
with tempfile.NamedTemporaryFile(delete=False, suffix='.mp3') as temp_audio:
    temp_audio.write(audio_data.read())
    temp_audio_path = temp_audio.name

# Play the audio file using playsound
playsound(temp_audio_path)
os.remove(temp_audio_path)  # Clean up the temporary file after playback

# Save the generated audio to the specified location
with open(save_path, 'wb') as f:
    audio_data.seek(0)  # Reset the pointer to the beginning of the stream to read the data again
    f.write(audio_data.read())

print(f'The audio file was saved to {save_path}')
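For reference, the elevenlabs package also exports a save() helper, so the BytesIO and tempfile round trip can be shortened when you only need the MP3 file on disk. A minimal sketch assuming the same API key and voice as above, with playback left out:

from elevenlabs import ElevenLabs, save

client = ElevenLabs(api_key="API_KEY")
audio = client.generate(text="Hello! This is a test without mpv.", voice="JBFqnCBsd6RMkjVDRZzb")
save(audio, "generated_audio.mp3")  # save() consumes the generator and writes the MP3 to disk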

Saturday, February 22, 2025

News : Python and Grok 3 Beta — The Age of Reasoning Agents

On the official website of x.ai you can find this:
We are thrilled to unveil an early preview of Grok 3, our most advanced model yet, blending superior reasoning with extensive pretraining knowledge.
You can find a simple and good example with Python and Pygame showing how it can be used.
The Grok 3 artificial intelligence can be used for:
Research
Brainstorm
Analyze Data
Create images
Code
For me, artificial intelligence helps me code faster when dealing with issues and bugs, with game design, and with parsing and transforming data.
I have not tested Grok 3 yet, but I can tell you that some artificial intelligence tools in the development area are bad, even when they claim to be dedicated to this kind of work.

Saturday, February 8, 2025

Python 3.13.0rc1 : Testing Python with a local Ollama install.

I was very busy with development and testing for about two weeks, my laptop was struggling, and I was working hard... Today I managed to test local background clipping on my laptop with a local Ollama installation, kept in a separate Python module but with the processing driven from the Python script. I also used Microsoft's Copilot artificial intelligence for the Python code, and it works well even though it is not, in theory, specialized in development. The source code is quite large, but the result is very good and fast:
import subprocess
import os
import json
from PIL import Image, ImageOps

class OllamaProcessor:
    def __init__(self, config_file):
        self.config_file = config_file
        self.model_methods = self.load_config()

    def load_config(self):
        try:
            with open(self.config_file, 'r') as file:
                config = json.load(file)
            print("Configuration loaded successfully.")
            return config
        except FileNotFoundError:
            print(f"Configuration file {self.config_file} not found.")
            raise
        except json.JSONDecodeError:
            print(f"Error decoding JSON from the configuration file {self.config_file}.")
            raise

    def check_ollama(self):
        try:
            result = subprocess.run(["ollama", "--version"], capture_output=True, text=True, check=True)
            print("Ollama is installed. Version:", result.stdout)
        except (subprocess.CalledProcessError, FileNotFoundError):
            # FileNotFoundError covers the case where the "ollama" executable is not in PATH.
            print("Ollama is not installed or not found in PATH. Ensure it's installed and accessible.")
            raise
... 
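The script reads a small JSON configuration that maps a model to the processing method it should run. Based on the log below ("Model llava pulled successfully for method process_images_in_folder"), the file presumably looks something like this; the exact keys are an assumption, not taken from the project:

import json

# Hypothetical shape of ollama_config.json: model name -> processing method.
config = {"llava": "process_images_in_folder"}
with open("ollama_config.json", "w") as file:
    json.dump(config, file, indent=4)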
Here is the result obtained after finishing running in the command line:
python ollama_test_001.py
Configuration file ollama_config.json created successfully.
Configuration loaded successfully.
Ollama is installed. Version: ollama version is 0.5.7

Available models: ['NAME']
pulling manifest
pulling 170370233dd5... 100% ▕██████████████▏ 4.1 GB
pulling 72d6f08a42f6... 100% ▕██████████████▏ 624 MB
pulling 43070e2d4e53... 100% ▕██████████████▏  11 KB
pulling c43332387573... 100% ▕██████████████▏   67 B
pulling ed11eda7790d... 100% ▕██████████████▏   30 B
pulling 7c658f9561e5... 100% ▕██████████████▏  564 B
verifying sha256 digest
writing manifest
success
Model llava pulled successfully for method process_images_in_folder.
Some "Command failed ..." but the result is cutting well and it has transparency !