
Sunday, November 30, 2025

News : the xonsh shell language and command prompt.

Xonsh is a modern, full-featured and cross-platform Python shell. The language is a superset of Python 3.6+ with additional shell primitives that you are used to from Bash and IPython. It works on all major systems including Linux, macOS, and Windows. Xonsh is meant for daily use by experts and novices alike.
Installation is easy with the pip tool:
python -m pip install 'xonsh[full]'
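Once installed, you start it by running xonsh. Here is a minimal, illustrative sketch of an interactive session (run on Linux/macOS), mixing Python expressions with shell commands:
print(2 + 21)                           # plain Python
ls -l                                   # a shell command
files = $(ls).split()                   # capture command output into a Python variable
print(len(files), "entries in", $PWD)   # read an environment variable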

News : mitmproxy - part 001.

Mitmproxy is an interactive, open‑source proxy tool that lets you intercept, inspect, and modify HTTP and HTTPS traffic in real time. It acts as a "man‑in‑the‑middle" between your computer and the internet, making it possible to debug, test, or analyze how applications communicate online.
Why Python?
  • Mitmproxy is built in Python and exposes a powerful addon API
  • You can write custom scripts to automate tasks and traffic manipulation
  • Block or rewrite requests and responses with flexible logic
  • Inject headers or simulate server responses for testing
  • Integrate with other Python tools for advanced automation
What you can do with it:
  • Intercept and inspect HTTP and HTTPS traffic in real time
  • Modify requests and responses dynamically with Python scripts
  • Block specific hosts or URLs to prevent unwanted connections
  • Inject custom headers into outgoing requests for debugging or control
  • Rewrite response bodies (HTML, JSON, text) using regex or custom logic
  • Log and save traffic flows for later analysis and replay
  • Simulate server responses to test client behavior offline
  • Automate testing of web applications and APIs with scripted rules
  • Monitor performance metrics such as latency and payload size
  • Integrate with other Python tools for advanced automation and analysis
  • Use a trusted root certificate to decrypt and modify HTTPS traffic securely
Let's install:
pip install mitmproxy
Let's see the Python script:
# addon.py
from mitmproxy import http
from mitmproxy import ctx
import re

BLOCKED_HOSTS = {
    "hyte.com",
    "ads.example.org",
}

REWRITE_RULES = [
    # Each rule: (pattern, replacement, content_type_substring)
    (re.compile(rb"Hello World"), b"Salut lume", "text/html"),
    (re.compile(rb"tracking", re.IGNORECASE), b"observare", "text"),
]

ADD_HEADERS = {
    "X-Debug-Proxy": "mitm",
    "X-George-Tool": "true",
}

class GeorgeProxy:
    def __init__(self):
        self.rewrite_count = 0

    def load(self, loader):
        ctx.log.info("GeorgeProxy addon loaded.")

    def request(self, flow: http.HTTPFlow):
        # Block specific hosts early
        host = flow.request.host
        if host in BLOCKED_HOSTS:
            flow.response = http.Response.make(
                403,
                b"Blocked by GeorgeProxy",
                {"Content-Type": "text/plain"}
            )
            ctx.log.warn(f"Blocked request to {host}")
            return

        # Add custom headers to outgoing requests
        for k, v in ADD_HEADERS.items():
            flow.request.headers[k] = v

        ctx.log.info(f"REQ {flow.request.method} {flow.request.url}")

    def response(self, flow: http.HTTPFlow):
        # Only process text-like contents
        ctype = flow.response.headers.get("Content-Type", "").lower()
        raw = flow.response.raw_content

        if raw and any(t in ctype for t in ["text", "html", "json"]):
            new_content = raw
            for pattern, repl, t in REWRITE_RULES:
                if t in ctype:
                    new_content, n = pattern.subn(repl, new_content)
                    self.rewrite_count += n

            if new_content != raw:
                flow.response.raw_content = new_content
                # Update Content-Length only if present
                if "Content-Length" in flow.response.headers:
                    flow.response.headers["Content-Length"] = str(len(new_content))
                ctx.log.info(f"Rewrote content ({ctype}); total matches: {self.rewrite_count}")

        ctx.log.info(f"RESP {flow.response.status_code} {flow.request.url}")

addons = [GeorgeProxy()]
Let's run it:
mitmdump -s addon.py
[21:46:04.435] Loading script addon.py
[21:46:04.504] GeorgeProxy addon loaded.
[21:46:04.506] HTTP(S) proxy listening at *:8080.
[21:46:18.547][127.0.0.1:52128] client connect
[21:46:18.593] REQ GET http://httpbin.org/get
[21:46:18.768][127.0.0.1:52128] server connect httpbin.org:80 (52.44.182.178:80)
[21:46:18.910] RESP 200 http://httpbin.org/get
127.0.0.1:52128: GET http://httpbin.org/get
              << 200 OK 353b
[21:46:19.019][127.0.0.1:52128] client disconnect
[21:46:19.021][127.0.0.1:52128] server disconnect httpbin.org:80 (52.44.182.178:80)
Let's see the result:
curl -x http://127.0.0.1:8080 http://httpbin.org/get
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "Proxy-Connection": "Keep-Alive",
    "User-Agent": "curl/8.13.0",
    "X-Amzn-Trace-Id": "Root=1-692c9f0b-7eaf43e61f276ee62b089933",
    "X-Debug-Proxy": "mitm",
    "X-George-Tool": "true"
  },
  "origin": "84.117.220.94",
  "url": "http://httpbin.org/get"
}
This means:
The request successfully went through mitmproxy running on 127.0.0.1:8080. The addon worked: it injected the custom headers (X-Debug-Proxy, X-George-Tool), and httpbin.org echoed back the request details, showing exactly what the server received.
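To also exercise the blocking rule, send a request to one of the BLOCKED_HOSTS through the proxy; based on the addon code above, a 403 response with the body "Blocked by GeorgeProxy" is the expected behavior:
curl -x http://127.0.0.1:8080 http://hyte.com/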

Python Tornado - part 001.

Python Tornado is a high‑performance web framework and asynchronous networking library designed for extreme scalability and real‑time applications. Its standout capability is handling tens of thousands of simultaneous connections efficiently, thanks to non‑blocking I/O.
This is an open source project actively maintained and available on tornadoweb.org.

Python Tornado – Key Capabilities

  • Massive Concurrency: Tornado can scale to tens of thousands of open connections without requiring huge numbers of threads.
  • Non‑blocking I/O: Its asynchronous design makes it ideal for apps that need to stay responsive under heavy load.
  • WebSockets Support: Built‑in support for WebSockets enables real‑time communication between clients and servers.
  • Long‑lived Connections: Perfect for long polling, streaming, or chat applications where connections remain open for extended periods.
  • Coroutines & Async/Await: Tornado integrates tightly with Python’s asyncio, allowing developers to write clean asynchronous code using coroutines.
  • Versatile Use Cases: Beyond web apps, Tornado can act as an HTTP client/server, handle background tasks, or integrate with other services.
  • Tornado setup: The script creates a web server using the Tornado framework, listening on port 8888.
  • Route definition: A single route /form is registered, handled by the FormHandler class.
  • GET request: When you visit http://localhost:8888/form, the server responds with an HTML page (form.html) that contains a simple input form.
  • POST request: When the form is submitted, the post() method retrieves the value of the name field using self.get_argument("name").
  • Response: The server then sends back a personalized message.
Let's see the script:
import tornado.ioloop
import tornado.web
import os

class FormHandler(tornado.web.RequestHandler):
    def get(self):
        self.render("form.html")  # Render an HTML form

    def post(self):
        name = self.get_argument("name")
        self.write(f"Hello, {name}!")

def make_app():
    return tornado.web.Application([
        (r"/form", FormHandler),
    ],
    template_path=os.path.join(os.path.dirname(__file__), "templates")  # <-- here: point Tornado at the templates folder
    )

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    print("Server pornit pe http://localhost:8888/form")
    tornado.ioloop.IOLoop.current().start()
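The post does not include form.html itself. Here is a minimal sketch that creates a template the handler above can render; the only part the handler relies on is the input field named "name", the rest is illustrative:
import os

os.makedirs("templates", exist_ok=True)
with open(os.path.join("templates", "form.html"), "w", encoding="utf-8") as f:
    f.write(
        "<!DOCTYPE html>\n"
        "<html>\n"
        "  <body>\n"
        '    <form method="post" action="/form">\n'
        '      <input type="text" name="name" placeholder="Your name">\n'
        '      <input type="submit" value="Send">\n'
        "    </form>\n"
        "  </body>\n"
        "</html>\n"
    )
With the server running, the POST handler can also be tested from the command line:
curl -d "name=George" http://localhost:8888/form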

Thursday, October 23, 2025

News : OneByteRadar - pymem python package.

Today, I found this GitHub project about how to patch the memory of a running executable with the Python programming language.
The author says:
CS:GO radar hack achieved by patching one byte of game memory. Written in Python 3.
I didn't test it, but it's good to know how these Python modules can be used.
pymem is a Python library that allows you to interact with the memory of running Windows processes. It supports:
  • Reading and writing memory values of other processes
  • Scanning memory for byte patterns
  • Allocating and freeing memory inside another process
  • Accessing modules (DLLs) loaded by a process
These capabilities are often used in game hacking, automation, or debugging tools.
NOTE: I used artificial intelligence to write this simple example; AI is more productive in programs with more complex syntax, but the basics of programming must still be known...
Let's see an example with the Edge browser open:
import pymem
import pymem.process
import re

# Open the Edge process (make sure it's running)
pm = pymem.Pymem("msedge.exe")

# Get the main module of the process
module = pymem.process.module_from_name(pm.process_handle, "msedge.exe")
base = module.lpBaseOfDll
size = module.SizeOfImage

# Read the module's memory
data = pm.read_bytes(base, size)

# Search for a test pattern (generic example)
pattern = re.search(rb'\x48\x89\x5C\x24\x08\x57\x48\x83', data)

if pattern:
    address = base + pattern.start()
    print(f"Pattern found at: {hex(address)}")

    # Read 8 bytes from the found address
    raw = pm.read_bytes(address, 8)
    print("Raw bytes:", raw)

    # Interpret the bytes as a little-endian integer
    value = int.from_bytes(raw, byteorder='little')
    print("Integer value:", value)

    # Write a new value (e.g., 12345678)
    new_value = (12345678).to_bytes(8, byteorder='little')
    pm.write_bytes(address, new_value, 8)
    print("Value overwritten with 12345678.")

else:
    print("Pattern not found.")

# Close the process handle
pm.close_process()
What is that pattern?
pattern = re.search(rb'\x48\x89\x5C\x24\x08\x57\x48\x83', data)
These bytes correspond to x86-64 assembly instructions. For example:

  • 48 89 5C 24 08 → mov [rsp+8], rbx
  • 57 → push rdi
  • 48 83 → the start of an instruction like add/sub/cmp with a 64-bit operand
This sequence is typical of a function prologue in compiled C++ code, where registers are saved to the stack.
This is a simple example; you won't see any change in Edge because it performs just one search and one overwrite:
python pymem_exemple_002.py
Pattern found at: 0x7ff7180cb3d5
Raw bytes: b'H\x89\\$\x08WH\x83'
Integer value: 9459906709773125960
Value overwritten with 12345678.
You replaced those 8 bytes with the integer 12345678, encoded as: 4E 61 BC 00 00 00 00 00 (in hex)
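You can double-check that encoding in a plain Python session:
# verify the little-endian encoding used in the example above
value = 12345678
raw = value.to_bytes(8, byteorder="little")
print(raw.hex(" ").upper())           # 4E 61 BC 00 00 00 00 00
print(int.from_bytes(raw, "little"))  # 12345678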
This corrupts the original instruction, which may crash Edge or cause undefined behavior.
It did not crash here; my Edge browser likely uses protections (ASLR, DEP, CFG) that can make such modifications unstable.
--------
1. ASLR (Address Space Layout Randomization)
2. DEP (Data Execution Prevention)
3. CFG (Control Flow Guard)

Saturday, October 18, 2025

Blender 3D : ... simple clothing addon.

Today, I created a simple clothing addon with a two-mesh coat. The addon adds everything needed for the simulation including material types for the clothes.

Wednesday, September 24, 2025

Python 3.10.7 : Krita and python - part 002.

A simple source code example that exports a PNG file for the Godot game engine to use as a Texture2D.
from krita import *
from PyQt5.QtWidgets import QAction, QMessageBox
import os

class ExportGodotPNG(Extension):
    def __init__(self, parent):
        super().__init__(parent)

    def setup(self):
        pass

    def export_png(self):
        # Get the active document
        doc = Krita.instance().activeDocument()
        if not doc:
            QMessageBox.warning(None, "Error", "No document open! Please open a document and try again.")
            return

        # Create an InfoObject for PNG export
        info = InfoObject()
        info.setProperty("alpha", True)  # Keep alpha channel for transparency
        info.setProperty("compression", 0)  # No compression for maximum quality
        info.setProperty("interlaced", False)  # Disable interlacing
        info.setProperty("forceSRGB", True)  # Force sRGB for Godot compatibility

        # Build the output file path
        if doc.fileName():
            base_path = os.path.splitext(doc.fileName())[0]
        else:
            base_path = os.path.join(os.path.expanduser("~"), "export_godot")
        output_file = base_path + "_godot.png"

        # Export the document as PNG
        try:
            doc.exportImage(output_file, info)
            # Show success message with brief usage info
            QMessageBox.information(None, "Success", 
                f"Successfully exported as PNG for Godot: {output_file}\n\n"
                "This PNG has no compression, alpha channel support, and sRGB for Godot compatibility. "
                "To use in Godot, import the PNG and adjust texture settings as needed."
            )
        except Exception as e:
            QMessageBox.critical(None, "Error", f"Export failed: {str(e)}")

    def createActions(self, window):
        # Create only the export action in Tools > Scripts
        action_export = window.createAction("export_godot_png", "Export Godot PNG", "tools/scripts")
        action_export.triggered.connect(self.export_png)

# Register the plugin
Krita.instance().addExtension(ExportGodotPNG(Krita.instance()))

Thursday, September 18, 2025

Blender 3D : ... addon add all materials from folder.

This is a simple addon for Blender 3D version 3.6.x.
The addon recursively searches a folder for all .blend files and appends every material it finds.
Let's see the script:
bl_info = {
    "name": "Append Materials from Folder",
    "author": "Grok",
    "version": (1, 2),
    "blender": (3, 0, 0),
    "location": "View3D > Sidebar > Append Materials",
    "description": "Select a folder and append all materials from .blend files recursively",
    "category": "Import-Export",
}

import bpy
import os
from bpy.types import Operator, Panel, PropertyGroup
from bpy.props import StringProperty, PointerProperty

class AppendMaterialsProperties(PropertyGroup):
    folder_path: StringProperty(
        name="Folder Path",
        description="Path to the folder containing .blend files",
        default="",
        maxlen=1024,
        subtype='DIR_PATH'
    )

class APPEND_OT_materials_from_folder(Operator):
    bl_idname = "append.materials_from_folder"
    bl_label = "Append Materials from Folder"
    bl_options = {'REGISTER', 'UNDO'}
    bl_description = "Append all materials from .blend files in the selected folder and subfolders"

    def execute(self, context):
        props = context.scene.append_materials_props
        folder_path = props.folder_path

        # Normalize path to avoid issues with slashes
        folder_path = os.path.normpath(bpy.path.abspath(folder_path))
        if not folder_path or not os.path.isdir(folder_path):
            self.report({'ERROR'}, f"Invalid or no folder selected: {folder_path}")
            return {'CANCELLED'}

        self.report({'INFO'}, f"Scanning folder: {folder_path}")
        blend_files_found = 0
        materials_appended = 0
        errors = []

        # Walk recursively through the folder
        for root, dirs, files in os.walk(folder_path):
            self.report({'INFO'}, f"Checking folder: {root}")
            for file in files:
                if file.lower().endswith('.blend'):
                    blend_files_found += 1
                    blend_path = os.path.join(root, file)
                    self.report({'INFO'}, f"Found .blend file: {blend_path}")
                    try:
                        # Open the .blend file to inspect materials
                        with bpy.data.libraries.load(blend_path, link=False) as (data_from, data_to):
                            if data_from.materials:
                                data_to.materials = data_from.materials
                                materials_appended += len(data_from.materials)
                                self.report({'INFO'}, f"Appended {len(data_from.materials)} materials from: {blend_path}")
                            else:
                                self.report({'WARNING'}, f"No materials found in: {blend_path}")
                    except Exception as e:
                        errors.append(f"Failed to process {blend_path}: {str(e)}")
                        self.report({'WARNING'}, f"Error in {blend_path}: {str(e)}")

        # Final report
        if blend_files_found == 0:
            self.report({'WARNING'}, f"No .blend files found in {folder_path} or its subfolders!")
        else:
            self.report({'INFO'}, f"Found {blend_files_found} .blend files, appended {materials_appended} materials.")
        if errors:
            self.report({'WARNING'}, f"Encountered {len(errors)} errors: {'; '.join(errors)}")

        return {'FINISHED'}

class VIEW3D_PT_append_materials(Panel):
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'UI'
    bl_category = "Append Materials"
    bl_label = "Append Materials from Folder"

    def draw(self, context):
        layout = self.layout
        props = context.scene.append_materials_props
        layout.prop(props, "folder_path")
        layout.operator("append.materials_from_folder", text="Append Materials")

def register():
    bpy.utils.register_class(AppendMaterialsProperties)
    bpy.utils.register_class(APPEND_OT_materials_from_folder)
    bpy.utils.register_class(VIEW3D_PT_append_materials)
    bpy.types.Scene.append_materials_props = PointerProperty(type=AppendMaterialsProperties)

def unregister():
    bpy.utils.unregister_class(VIEW3D_PT_append_materials)
    bpy.utils.unregister_class(APPEND_OT_materials_from_folder)
    bpy.utils.unregister_class(AppendMaterialsProperties)
    del bpy.types.Scene.append_materials_props

if __name__ == "__main__":
    register()

Monday, September 8, 2025

Python 3.13.0 : Script that scans for Python modules then installs them - updated with fix.

This script scans a folder of .py Python scripts, identifies the modules they import, checks which ones are already available, and installs the missing ones using pip, in parallel threads for speed.
I used Copilot, and some of the comments and messages are in my language (Romanian), but I tested it and it works well.
NOTE: I updated it to also detect Python modules imported with "from", and fixed another issue: it now checks whether a Python module is already installed and skips it...
This script will try to install many Python modules; I could improve it to handle these issues:
...some modules are part of the standard library, some scripts belong to another environment (see Blender 3D with the bpy module), and some packages ship modules under the same name; this could be solved with predefined lists of unique items.
import subprocess
import sys
import os
import shutil
import importlib.util
import re
import concurrent.futures
from typing import List, Tuple, Set

class ModuleManager:
    def __init__(self):
        self.modules: Set[str] = set()
        self.pip_path = self._get_pip_path()

    def _get_pip_path(self) -> str:
        possible_path = os.path.join(sys.exec_prefix, "Scripts", "pip.exe")
        return shutil.which("pip") or (possible_path if os.path.exists(possible_path) else None)

    def extract_imports_from_file(self, file_path: str) -> List[Tuple[str, str]]:
        imports = []
        try:
            with open(file_path, 'r', encoding='utf-8') as file:
                for line in file:
                    # Detect 'import module'
                    import_match = re.match(r'^\s*import\s+([a-zA-Z0-9_]+)(\s+as\s+.*)?$', line)
                    if import_match:
                        module = import_match.group(1)
                        imports.append((module, line.strip()))
                        continue
                    
                    # Detect 'from module import ...'
                    from_match = re.match(r'^\s*from\s+([a-zA-Z0-9_]+)\s+import\s+.*$', line)
                    if from_match:
                        module = from_match.group(1)
                        imports.append((module, line.strip()))
        except FileNotFoundError:
            print(f"❌ Fișierul {file_path} nu a fost găsit.")
        except Exception as e:
            print(f"❌ Eroare la citirea fișierului {file_path}: {e}")
        return imports

    def scan_directory_for_py_files(self, directory: str = '.') -> List[str]:
        py_files = []
        for root, _, files in os.walk(directory):
            for file in files:
                if file.endswith('.py'):
                    py_files.append(os.path.join(root, file))
        return py_files

    def collect_unique_modules(self, directory: str = '.') -> None:
        py_files = self.scan_directory_for_py_files(directory)
        all_imports = []
        with concurrent.futures.ThreadPoolExecutor() as executor:
            future_to_file = {executor.submit(self.extract_imports_from_file, file_path): file_path for file_path in py_files}
            for future in concurrent.futures.as_completed(future_to_file):
                imports = future.result()
                all_imports.extend(imports)
        
        for module, _ in all_imports:
            self.modules.add(module)

    def is_module_installed(self, module: str) -> bool:
        return importlib.util.find_spec(module) is not None

    def run_pip_install(self, module: str) -> bool:
        if not self.pip_path:
            print(f"❌ Nu am găsit pip pentru {module}.")
            return False
        try:
            subprocess.check_call([self.pip_path, "install", module])
            print(f"✅ Pachetul {module} a fost instalat cu succes.")
            return True
        except subprocess.CalledProcessError as e:
            print(f"❌ Eroare la instalarea pachetului {module}: {e}")
            return False

    def check_and_install_modules(self) -> None:
        def process_module(module):
            print(f"\n🔎 Verific dacă {module} este instalat...")
            if self.is_module_installed(module):
                print(f"✅ {module} este deja instalat.")
            else:
                print(f"📦 Instalez {module}...")
                self.run_pip_install(module)
                # Re-verifică după instalare
                if self.is_module_installed(module):
                    print(f"✅ {module} funcționează acum.")
                else:
                    print(f"❌ {module} nu funcționează după instalare.")

        with concurrent.futures.ThreadPoolExecutor() as executor:
            executor.map(process_module, self.modules)

def main():
    print("🔍 Verific pip...")
    manager = ModuleManager()
    if manager.pip_path:
        print(f"✅ Pip este disponibil la: {manager.pip_path}")
    else:
        print("⚠️ Pip nu este disponibil.")
        return

    directory = sys.argv[1] if len(sys.argv) > 1 else '.'
    print(f"\n📜 Scanez directorul {directory} pentru fișiere .py...")
    manager.collect_unique_modules(directory)
    
    if not manager.modules:
        print("⚠️ Nu s-au găsit module în importuri.")
        return
    
    print(f"\nModule unice detectate: {', '.join(manager.modules)}")
    manager.check_and_install_modules()

if __name__ == "__main__":
    main()
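Assuming the script is saved as check_modules.py (a hypothetical name), point it at a project folder from the command line; with no argument it scans the current directory:
python check_modules.py D:\PythonProjects\some_project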

Saturday, August 30, 2025

Python 3.13.0 : Predicted XAU/USD with torch.

Testing the torch Python package:
import torch
import torch.nn as nn
import numpy as np

data = np.array([
    [1800.5, 1810.0, 1795.0, 1000, 1805.2],
    [1805.2, 1815.0, 1800.0, 1200, 1812.8],
    [1812.8, 1820.0, 1808.0, 1100, 1810.5],
    [1810.5, 1818.0, 1805.0, 1300, 1825.0],
    [1825.0, 1830.0, 1815.0, 1400, 1820.3],
    [1820.3, 1828.0, 1810.0, 1250, 1835.7]
])

X, y = torch.tensor(data[:, :4], dtype=torch.float32), torch.tensor(data[:, 4], dtype=torch.float32)
model = nn.Sequential(nn.Linear(4, 6), nn.ReLU(), nn.Linear(6, 4), nn.ReLU(), nn.Linear(4, 1))
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()
for _ in range(3000):
    optimizer.zero_grad()
    y_pred = model(X).squeeze()
    loss = loss_fn(y_pred, y)
    loss.backward()
    optimizer.step()
prediction = model(torch.tensor([[1830.0, 1840.0, 1825.0, 1150]], dtype=torch.float32))
print("Predicted XAU/USD closing price:", round(prediction.item(), 2))
The result is :
python torch_001.py
Predicted XAU/USD closing price: 1819.57

Python 3.13.0 : Predicted XAU/USD with tensorflow.

This is the source code :
import tensorflow as tf
import numpy as np

data = np.array([
    [1800.5, 1810.0, 1795.0, 1000, 1805.2],
    [1805.2, 1815.0, 1800.0, 1200, 1812.8],
    [1812.8, 1820.0, 1808.0, 1100, 1810.5],
    [1810.5, 1818.0, 1805.0, 1300, 1825.0],
    [1825.0, 1830.0, 1815.0, 1400, 1820.3],
    [1820.3, 1828.0, 1810.0, 1250, 1835.7]
])

X, y = data[:, :4], data[:, 4]
model = tf.keras.Sequential([
    tf.keras.layers.Dense(6, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=3000, verbose=0)
prediction = model.predict(np.array([[1830.0, 1840.0, 1825.0, 1150]]))
print("Predicted XAU/USD closing price:", round(prediction[0][0], 2))
The result is :
python tf_001.py
2025-08-30 21:11:13.966066: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
C:\Python313\Lib\site-packages\google\protobuf\runtime_version.py:98: UserWarning: Protobuf gencode version 5.28.3 is exactly one major version older than the runtime version 6.31.1 at tensorflow/core/framework/attr_value.proto. Please update the gencode to avoid compatibility violations in the next runtime release.
...
Predicted XAU/USD closing price: 2.9
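The TensorFlow prediction is far off compared with the torch and sklearn runs; with raw inputs around 1800 and the default Adam learning rate, 3000 epochs are not enough for the network to converge. Below is a minimal sketch (same toy data, illustrative only) that scales the features and the target before fitting, which usually lands much closer to the other results:
import numpy as np
import tensorflow as tf

data = np.array([
    [1800.5, 1810.0, 1795.0, 1000, 1805.2],
    [1805.2, 1815.0, 1800.0, 1200, 1812.8],
    [1812.8, 1820.0, 1808.0, 1100, 1810.5],
    [1810.5, 1818.0, 1805.0, 1300, 1825.0],
    [1825.0, 1830.0, 1815.0, 1400, 1820.3],
    [1820.3, 1828.0, 1810.0, 1250, 1835.7]
])

X, y = data[:, :4], data[:, 4]

# scale inputs and target to roughly unit range before fitting
X_mean, X_std = X.mean(axis=0), X.std(axis=0)
y_mean, y_std = y.mean(), y.std()
Xs, ys = (X - X_mean) / X_std, (y - y_mean) / y_std

model = tf.keras.Sequential([
    tf.keras.layers.Dense(6, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.fit(Xs, ys, epochs=3000, verbose=0)

# scale the new sample the same way, then undo the scaling on the prediction
sample = (np.array([[1830.0, 1840.0, 1825.0, 1150]]) - X_mean) / X_std
pred = model.predict(sample)[0][0] * y_std + y_mean
print("Predicted XAU/USD closing price:", round(float(pred), 2))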

Wednesday, August 27, 2025

Python 3.13.0 : Predicted XAU/USD with MLPRegressor.

Testing the MLPRegressor from the sklearn Python package:
from sklearn.neural_network import MLPRegressor
import numpy as np

data = np.array([
    [1800.5, 1810.0, 1795.0, 1000, 1805.2],
    [1805.2, 1815.0, 1800.0, 1200, 1812.8],
    [1812.8, 1820.0, 1808.0, 1100, 1810.5],
    [1810.5, 1818.0, 1805.0, 1300, 1825.0],
    [1825.0, 1830.0, 1815.0, 1400, 1820.3],
    [1820.3, 1828.0, 1810.0, 1250, 1835.7]
])

X, y = data[:, :4], data[:, 4]
model = MLPRegressor(hidden_layer_sizes=(6, 4), max_iter=3000)
model.fit(X, y)

prediction = model.predict([[1830.0, 1840.0, 1825.0, 1150]])
print("Predicted XAU/USD closing price:", round(prediction[0], 2))
The answer is: Predicted XAU/USD closing price: 1836.68

Sunday, August 17, 2025

News : Django Hijack

Log in and work on behalf of other users without having to know their credentials.
With Django Hijack, admins can impersonate and work on behalf of other users without having to know their credentials.
It comes in two versions: v2.x and v3.x. The latest, v3.x, provides a security-first design, easy integration, customisation, out-of-the-box Django admin support and dark mode. It is a complete rewrite and all former APIs are broken.

Saturday, July 5, 2025

Python 3.13.5 : the groq tested with basic example !

You can find out more about Groq on the official Groq website.
Today I tested the Groq API with the basic example:
import os

from groq import Groq

client = Groq(
    api_key="gsk_..."
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "explain what groq is",
        }
    ],
    model="llama-3.3-70b-versatile",
)

print(chat_completion.choices[0].message.content)
The result of "explain what groq is" is:
Groq is a relatively new and innovative technology company that focuses on developing high-performance, artificial intelligence (AI) computing hardware and software solutions. Founded in 2016 by Jonathan Ross, a former Google engineer, Groq aims to provide faster, more efficient, and scalable AI computing capabilities to various industries. Groq's primary product is a tensor processing unit (TPU) chip designed specifically for machine learning (ML) and deep learning (DL) workloads. This TPU is optimized for high-performance, low-latency, and low-power consumption, making it suitable for applications that require rapid processing of vast amounts of data, such as: 1. **Computer Vision**: Image and video recognition, object detection, and tracking. 2. **Natural Language Processing (NLP)**: Text analysis, sentiment analysis, and language translation. 3. **Speech Recognition**: Voice command processing and speech-to-text applications. Groq's innovative architecture and design philosophy are centered around the following key aspects: 1. **High-Bandwidth Memory**: Groq's TPU features high-bandwidth memory, which enables fast data transfer and processing. 2. **Scalable Architecture**: The TPU is designed to scale horizontally, allowing for easy expansion to meet growing computational demands. 3. **Low-Latency**: Groq's architecture minimizes latency, ensuring fast response times and real-time processing capabilities. 4. **Software-Defined**: The TPU is software-defined, allowing for flexibility and customization to support a wide range of AI applications and frameworks. Groq's technology has the potential to accelerate AI adoption in various industries, including: 1. **Autonomous Vehicles**: Enhanced computer vision and sensor processing for safer and more efficient autonomous driving. 2. **Healthcare**: Faster medical image analysis and diagnosis, as well as improved personalized medicine and treatment planning. 3. **Financial Services**: Enhanced risk analysis, portfolio optimization, and fraud detection using AI-powered systems. While Groq is still a relatively new company, its innovative approach to AI computing has garnered significant attention and interest from the tech industry, investors, and potential customers. As AI continues to transform various aspects of our lives, Groq's technology is poised to play a significant role in shaping the future of artificial intelligence and machine learning.

Tuesday, July 1, 2025

Python 3.13.5 : use the jupyterlab, notebook and voila - part 001.

JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, computational journalism, and machine learning. A modular design invites extensions to expand and enrich functionality.
Let's test with these python modules: jupyterlab, notebook and voila.
First, I install the jupyterlab Python module with the pip tool:
pip install jupyterlab
Collecting jupyterlab
...
Successfully installed anyio-4.9.0 argon2-cffi-25.1.0 argon2-cffi-bindings-21.2.0 arrow-1.3.0 asttokens-3.0.0 async-lru-2.0.5
attrs-25.3.0 babel-2.17.0 bleach-6.2.0 cffi-1.17.1 comm-0.2.2 debugpy-1.8.14 defusedxml-0.7.1 executing-2.2.0 
fastjsonschema-2.21.1 fqdn-1.5.1 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 ipykernel-6.29.5 ipython-9.4.0 
ipython-pygments-lexers-1.1.1 isoduration-20.11.0 jedi-0.19.2 json5-0.12.0 jsonpointer-3.0.0 jsonschema-4.24.0 
jsonschema-specifications-2025.4.1 jupyter-client-8.6.3 jupyter-core-5.8.1 jupyter-events-0.12.0 jupyter-lsp-2.2.5 
jupyter-server-2.16.0 jupyter-server-terminals-0.5.3 jupyterlab-4.4.4 jupyterlab-pygments-0.3.0 jupyterlab-server-2.27.3 
matplotlib-inline-0.1.7 mistune-3.1.3 nbclient-0.10.2 nbconvert-7.16.6 nbformat-5.10.4 nest-asyncio-1.6.0 notebook-shim-0.2.4
overrides-7.7.0 pandocfilters-1.5.1 parso-0.8.4 platformdirs-4.3.8 prometheus-client-0.22.1 prompt_toolkit-3.0.51 
psutil-7.0.0 pure-eval-0.2.3 pycparser-2.22 python-json-logger-3.3.0 pywin32-310 pywinpty-2.0.15 pyyaml-6.0.2 pyzmq-27.0.0
referencing-0.36.2 rfc3339-validator-0.1.4 rfc3986-validator-0.1.1 rpds-py-0.26.0 send2trash-1.8.3 sniffio-1.3.1 
stack_data-0.6.3 terminado-0.18.1 tinycss2-1.4.0 tornado-6.5.1 traitlets-5.14.3 types-python-dateutil-2.9.0.20250516 
uri-template-1.3.0 wcwidth-0.2.13 webcolors-24.11.1 webencodings-0.5.1 websocket-client-1.8.0
Next, install the notebook Python module:
pip install notebook
Collecting notebook
...
Installing collected packages: notebook
Successfully installed notebook-7.4.4
The Python package named voila will help us serve notebooks as web-based graphical user interfaces:
pip install voila
Collecting voila
...
Successfully installed voila-0.5.8 websockets-15.0.1
The voila package needs the ipywidgets Python package to work:
pip install ipywidgets
Collecting ipywidgets
...
Successfully installed ipywidgets-8.1.7 jupyterlab_widgets-3.0.15 widgetsnbextension-4.0.14
Let's start the Jupyter tool with this command:
jupyter notebook
I used a default Python example with a slider:
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(value=5, min=0, max=10)
display(slider)
This command will serve the notebook with the slider as a web page, using the voila command:
voila test.ipynb
The result is this:
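If you want the voila page to actually react to the slider, here is a slightly larger sketch (same packages, hypothetical example) that wires the slider to a label:
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(value=5, min=0, max=10, description="x")
label = widgets.Label()

def on_change(change):
    # update the label whenever the slider value changes
    label.value = f"x squared is {change['new'] ** 2}"

slider.observe(on_change, names="value")
on_change({"new": slider.value})  # initialize the label once
display(slider, label)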

Friday, June 27, 2025

Python 3.13.5 : the manim python module - part 001.

I used this package a few days ago and now I am writing about how it can be used.
Update: a new release of pip is available (25.0.1 -> 25.1.1); run this command:
python.exe -m pip install --upgrade pip
The documentation can be found on the official website.
pip install manim
Collecting manim
  Downloading manim-0.19.0-py3-none-any.whl.metadata (11 kB)
...
Successfully installed Pillow-11.2.1 Pygments-2.19.1 audioop-lts-0.2.1 av-13.1.0 beautifulsoup4-4.13.4 click-8.2.1 cloup-3.0.7 colorama-0.4.6 decorator-5.2.1 glcontext-3.0.0 isosurfaces-0.1.2 manim-0.19.0 manimpango-0.6.0 mapbox-earcut-1.0.3 markdown-it-py-3.0.0 mdurl-0.1.2 moderngl-5.12.0 moderngl-window-3.1.1 networkx-3.5 numpy-2.3.0 pycairo-1.28.0 pydub-0.25.1 pyglet-2.1.6 pyglm-2.8.2 rich-14.0.0 scipy-1.15.3 screeninfo-0.8.1 skia-pathops-0.8.0.post2 soupsieve-2.7 srt-3.5.3 svgelements-1.9.6 tqdm-4.67.1 typing-extensions-4.14.0 watchdog-6.0.0
Let's see how it can be used; start by viewing the help:
python.exe -m manim render --help
Manim Community v0.19.0

Usage: python -m manim render ...
Let's use this source code:
from manim import *

class AdvancedAnimation(Scene):
    def construct(self):
        # Scene 1: Introduction
        title = Text("Advanced Animation with Manim").scale(0.76)
        self.play(FadeIn(title))
        self.wait(2)

        # Scene 2: Custom Animation
        circle = Circle().set_fill(color=BLUE, opacity=0.5)
        square = Square().set_fill(color=RED, opacity=0.5)
        self.add(circle, square)
        self.play(
            Rotate(circle, angle=TAU),
            Rotate(square, angle=-TAU),
            run_time=2,
            rate_func=linear
        )
        self.wait()

        # Scene 3: Text Animation
        text = Text("This is a custom text animation", font_size=40).to_edge(UP)
        self.play(Write(text), run_time=2)
        self.wait()

        # Scene 4: Shapes Manipulation
        triangle = Triangle().shift(RIGHT * 2)
        self.play(GrowFromCenter(triangle), run_time=1.5)
        self.wait()

        # Scene 5: Transition to next scene
        self.play(Uncreate(triangle), FadeOut(text))

        # Scene 6: Final Animation
        final_text = Text("This is the end of our animation", font_size=50).to_edge(DOWN)
        self.play(FadeIn(final_text), run_time=1.5)
        self.wait(2)

# Run the animation
AdvancedAnimation()
Use this command to render:
python.exe -m manim render manim_test_001.py AdvancedAnimation -p
Manim Community v0.19.0

[06/27/25 19:52:43] INFO     Animation 0 : Partial movie file      scene_file_writer.py:588
                             written in
                             'D:\PythonProjects\manim_projects\med
                             ia\videos\manim_test_001\1080p60\part
                             ial_movie_files\AdvancedAnimation\397
                             7891868_355746014_223132457.mp4'
...
[06/27/25 19:53:56] INFO     Previewed File at:                             file_ops.py:237
                             'D:\PythonProjects\manim_projects\media\videos
                             \manim_test_001\1080p60\AdvancedAnimation.mp4'
The result comes with many files, see this 1080p60 video result:

Tuesday, June 24, 2025

Python 3.13.5 : Get bookmarks from Edge browser with python.

Today I tested with these Python modules: json and pathlib.
This Python script will get the bookmarks from the Edge browser's bookmark bar:
import json
from pathlib import Path

bookmark_path = Path.home() / "AppData/Local/Microsoft/Edge/User Data/Default/Bookmarks"

with open(bookmark_path, "r", encoding="utf-8") as f:
    data = json.load(f)

# Example: list all bookmark titles
def extract_bookmarks(bookmark_node):
    bookmarks = []
    if "children" in bookmark_node:
        for child in bookmark_node["children"]:
            bookmarks.extend(extract_bookmarks(child))
    elif bookmark_node.get("type") == "url":
        bookmarks.append((bookmark_node["name"], bookmark_node["url"]))
    return bookmarks

all_bookmarks = extract_bookmarks(data["roots"]["bookmark_bar"])
for name, url in all_bookmarks:
    print(f"{name}: {url}")

Friday, June 20, 2025

Python 3.13.5 : testing with flask, requests and playwright python modules.

Today, some testing with these Python modules: flask, requests and playwright.
I used pip to install the flask Python package:
pip install flask
Collecting flask
  Downloading flask-3.1.1-py3-none-any.whl.metadata (3.0 kB)
...
Installing collected packages: markupsafe, itsdangerous, blinker, werkzeug, jinja2, flask
Successfully installed blinker-1.9.0 flask-3.1.1 itsdangerous-2.2.0 jinja2-3.1.6 markupsafe-3.0.2 werkzeug-3.1.3
pip install requests
...
Installing collected packages: urllib3, idna, charset_normalizer, certifi, requests
Successfully installed certifi-2025.6.15 charset_normalizer-3.4.2 idna-3.10 requests-2.32.4 urllib3-2.5.0
pip install playwright
Collecting playwright
...
Installing collected packages: pyee, greenlet, playwright
Successfully installed greenlet-3.2.3 playwright-1.52.0 pyee-13.0.0
...
playwright install
Downloading Chromium 136.0.7103.25 ...
This will download a lot of files and then install the Playwright browsers.
The first script is a simple one that tries to read the Cloudflare header and the IP the server sees:
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def detecteaza_ipuri():
    ip_client = request.headers.get('CF-Connecting-IP', 'Unknown')
    ip_cloudflare = request.remote_addr
    return (
        f"Real visitor IP: {ip_client}\n"
        f"Cloudflare IP (visible to the server): {ip_cloudflare}"
    )

if __name__ == "__main__":
    app.run(debug=True)
The next one will check more ...
from flask import Flask, request
import requests
import ipaddress

app = Flask(__name__)

def este_ip_cloudflare(ip):
    try:
        raspuns = requests.get("https://www.cloudflare.com/ips-v4")
        raspuns.raise_for_status()
        subneturi = raspuns.text.splitlines()

        for subnet in subneturi:
            if ipaddress.ip_address(ip) in ipaddress.ip_network(subnet):
                return True
        return False
    except Exception as e:
        return f"Eroare la verificarea IP-ului Cloudflare: {e}"

@app.route("/")
def detecteaza_ipuri():
    ip_client = request.headers.get('CF-Connecting-IP', 'Unknown')
    ip_cloudflare = request.remote_addr
    rezultat = este_ip_cloudflare(ip_cloudflare)

    return (
        f"Real visitor IP: {ip_client}\n"
        f"Cloudflare IP (forwarder): {ip_cloudflare}\n"
        f"Is the IP from the Cloudflare network? {'YES' if rezultat == True else 'NO' if rezultat == False else rezultat}"
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
Now, the script with the requests Python module:
import requests

url = "https://cobalt.tools"
headers = {
    "User-Agent": "Mozilla/5.0",  # Simulează un browser real
}

try:
    r = requests.get(url, headers=headers, timeout=10)
    content = r.text.lower()

    print(f"Cod răspuns HTTP: {r.status_code}")

    if "cloudflare" in content or "cf-ray" in content or "attention required" in content:
        print("Cloudflare a intermediat cererea sau a blocat-o cu o pagină specială.")
    else:
        print("Cererea a fost servită normal.")
except Exception as e:
    print(f"Eroare la conexiune: {e}")
The last one will use the playwright python module:
import sys
import re
from urllib.parse import urlparse
from pathlib import Path
from playwright.sync_api import sync_playwright

def converteste_url_in_nume_fisier(url):
    parsed = urlparse(url)
    host = parsed.netloc.replace('.', '_')
    path = parsed.path.strip('/').replace('/', '_')
    if not path:
        path = 'index'
    return f"{host}_{path}.txt"

if len(sys.argv) != 2:
    print("Utilizare: python script.py https://exemplu.com")
    sys.exit(1)

url = sys.argv[1]
fisier_output = converteste_url_in_nume_fisier(url)

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    pagina = browser.new_page()
    pagina.goto(url, wait_until='networkidle')
    continut = pagina.content()
    Path(fisier_output).write_text(continut, encoding='utf-8')
    browser.close()

print(f"Conținutul a fost salvat în: {fisier_output}")
This will create a file with the source code of web page.

Thursday, June 19, 2025

News : UV - fast Python package and project manager.

An extremely fast Python package and project manager, written in Rust.
cd uv_projects

uv_projects>uv init hello-world
Initialized project `hello-world` at `D:\PythonProjects\uv_projects\hello-world`

uv_projects>cd hello-world

uv_projects\hello-world>uv run main.py
Using CPython 3.13.5 interpreter at: C:\Python3135\python.exe
Creating virtual environment at: .venv
Hello from hello-world!
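uv also manages project dependencies; for example, adding a package to the hello-world project created above (illustrative command, output omitted):
uv_projects\hello-world>uv add requests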

Saturday, February 8, 2025

Python 3.13.0rc1 : Testing python with Ollama local install.

I was very busy with development and testing for about two weeks, my laptop was stuck, and I was working hard... Today I managed to test local background clipping on my laptop with a local Ollama installation, driven from a Python script via subprocess rather than through a dedicated Ollama Python module. I also used Microsoft's Copilot artificial intelligence for the Python code, and it works well even though it is not, in theory, specialized in development. The source code is quite large but the result is very good and fast:
import subprocess
import os
import json
from PIL import Image, ImageOps

class OllamaProcessor:
    def __init__(self, config_file):
        self.config_file = config_file
        self.model_methods = self.load_config()

    def load_config(self):
        try:
            with open(self.config_file, 'r') as file:
                config = json.load(file)
            print("Configuration loaded successfully.")
            return config
        except FileNotFoundError:
            print(f"Configuration file {self.config_file} not found.")
            raise
        except json.JSONDecodeError:
            print(f"Error decoding JSON from the configuration file {self.config_file}.")
            raise

    def check_ollama(self):
        try:
            result = subprocess.run(["ollama", "--version"], capture_output=True, text=True, check=True)
            print("Ollama is installed. Version:", result.stdout)
        except subprocess.CalledProcessError as e:
            print("Ollama is not installed or not found in PATH. Ensure it's installed and accessible.")
            raise
... 
Here is the result obtained after running it from the command line:
python ollama_test_001.py
Configuration file ollama_config.json created successfully.
Configuration loaded successfully.
Ollama is installed. Version: ollama version is 0.5.7

Available models: ['NAME']
pulling manifest
pulling 170370233dd5... 100% ▕██████████████▏ 4.1 GB
pulling 72d6f08a42f6... 100% ▕██████████████▏ 624 MB
pulling 43070e2d4e53... 100% ▕██████████████▏  11 KB
pulling c43332387573... 100% ▕██████████████▏   67 B
pulling ed11eda7790d... 100% ▕██████████████▏   30 B
pulling 7c658f9561e5... 100% ▕██████████████▏  564 B
verifying sha256 digest
writing manifest
success
Model llava pulled successfully for method process_images_in_folder.
Some "Command failed ..." but the result is cutting well and it has transparency !