Hosting Price Comparison

4GB RAM VPS: Cheapest Options (Early 2026)

Key Takeaways for the Cheapest Tiers:

1. Hetzner (CX23) — €3.49. You were right; this is the current price leader for 4GB RAM. The price: €3.49 per month (billed hourly at €0.0056/hr). IPv4 note: this price is for an IPv6-only server; if you need a public IPv4 address, Hetzner adds a small monthly fee (usually around €0.50–€1.00), bringing the total closer to €4.10–€4.50. Location: this price typically applies to their European data centers (Germany/Finland); US locations (Virginia/Oregon) may be slightly higher due to regional costs. Note that Hetzner is strict on account verification.
2. IONOS (VPS M) — $4.00. If you are in the US or prefer USD billing without currency conversion: ...

February 15, 2026 · 2 min · Aleksei Aksenov

Monitoring Setup with Grafana and Prometheus

This is the definitive “Remote Monitoring” guide. You will have two separate machines communicating with each other.

The Architecture

- Server A (Target): runs your apps plus lightweight “exporters” (agents) that expose metrics.
- Server B (Monitoring): runs Prometheus (the database) and Grafana (the dashboard). It reaches out to Server A to “scrape” data.

Step 1: Prepare the Target Server (Your App Server)

Do this on the server running your actual applications. Create a folder:

mkdir -p ~/monitoring-agent ...
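The scrape relationship between the two machines can be sketched in the Prometheus configuration on Server B. This is a minimal sketch only: the job name, target IP, and the assumption that Server A runs node_exporter on its default port 9100 are illustrative, not from the guide.

```yaml
# prometheus.yml on Server B (monitoring)
scrape_configs:
  - job_name: "server-a-node"        # illustrative job name
    scrape_interval: 15s
    static_configs:
      - targets: ["203.0.113.10:9100"]   # Server A's IP + node_exporter default port (assumed)
```

Prometheus then pulls metrics from Server A on each interval; Grafana only ever talks to Prometheus.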

February 15, 2026 · 3 min · Aleksei Aksenov

Running Claude Code with a Different Model via LiteLLM Proxy

Claude Code is Anthropic’s official CLI tool for interacting with Claude models directly from your terminal. By default, it connects to Anthropic’s API, but sometimes you may want to use a different LLM provider — for cost savings, privacy, or to experiment with alternative models. In this guide, I’ll show how to redirect Claude Code requests through a LiteLLM proxy, allowing you to swap in any compatible model while keeping the Claude Code interface you’re familiar with. ...
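The redirection mechanism can be sketched with two environment variables that Claude Code reads at startup. This is a sketch under assumptions: port 4000 is LiteLLM's default proxy port, and the token value is a placeholder for whatever key your proxy is configured to accept.

```shell
# Point Claude Code at a local LiteLLM proxy instead of api.anthropic.com.
export ANTHROPIC_BASE_URL="http://localhost:4000"       # LiteLLM's default port (assumed setup)
export ANTHROPIC_AUTH_TOKEN="your-litellm-master-key"   # placeholder key
# Then start Claude Code as usual:  claude
```

With these set, requests keep the Anthropic wire format but land on the proxy, which maps them to whatever backend model your LiteLLM config defines.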

February 14, 2026 · 5 min · Aleksei Aksenov
Install Claude Code Without Internet


Why would you need this? The standard Claude Code installer runs curl ... | bash and downloads the binary on the fly. That works great — unless your Mac can’t reach the internet. Corporate networks with strict firewalls, air-gapped environments, or simply a flaky hotel Wi-Fi can all get in the way. The fix is simple: download the binary on a machine that does have internet, transfer it, and run the built-in installer. ...
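One way to realize the download-transfer-install pattern is via the npm package; this is a sketch under assumptions (the post's own steps may use the standalone binary and built-in installer instead, and `scp` stands in for whatever transfer method you have, e.g. a USB stick):

```shell
# On a machine WITH internet: fetch the package as a local tarball.
npm pack @anthropic-ai/claude-code        # writes anthropic-ai-claude-code-<version>.tgz

# Transfer the tarball to the offline Mac.
scp anthropic-ai-claude-code-*.tgz user@offline-mac:~/

# On the offline machine: install from the local file, no network needed.
npm install -g ./anthropic-ai-claude-code-*.tgz
claude --version
```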

February 13, 2026 · 5 min · Aleksei Aksenov
VPN 3X-UI VLESS+Reality Setup Guide

VPN on VPS: 3X-UI + VLESS+Reality in One Command

TLDR

Deploy a ready-to-use VLESS+Reality VPN on any VPS in under 5 minutes:

curl -sL https://raw.githubusercontent.com/fresh-fx59/threeiks-juai-forest/main/setup.sh | sudo bash

One command and everything is configured automatically. You get a VLESS URI ready to paste into your VPN app. No manual panel setup needed.

What You Get

VLESS+Reality is one of the most censorship-resistant proxy protocols available today. It works by mimicking a TLS handshake to a real website (e.g., dl.google.com), so deep packet inspection (DPI) systems see what looks like normal HTTPS traffic to Google. If they probe the server, it actually proxies to Google; only authenticated clients get through the VPN tunnel. ...

February 13, 2026 · 6 min · Aleksei Aksenov
Saving Data for Disaster Recovery with Amazon S3 Glacier Deep Archive


TL;DR

- Create an AWS account, an S3 bucket in a cheap region, and a dedicated IAM user with minimal S3-related permissions.
- Install and configure the AWS CLI on the machine that will do the upload.
- Stage 1 — Archive: run glacier_archive_split.sh to compress source folders into ~100GB .tar.gz chunks across two transit disks (uses pigz for fast parallel compression).
- Stage 2 — Upload: run glacier_upload.sh to upload archives to S3 Glacier Deep Archive via resumable multipart upload (100MB parts, crash-safe).
- Cost: ~$1/month per TB stored; uploads are free; a full retrieval of 1TB costs ~$96 (mostly data transfer out).
- Safety: original data is never touched, every upload is verified, and resume survives power outages (losing at most ~100MB of progress).

This post has a CLAUDE.md file, so you can adapt the setup to your needs via Claude Code. ...
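The cost figures above can be sanity-checked with quick arithmetic. The rates below are illustrative approximations of AWS's published US-region pricing (storage ~$0.00099/GB-month, data transfer out ~$0.09/GB, bulk retrieval ~$0.0025/GB); check the current price list for your region.

```shell
# Back-of-envelope Glacier Deep Archive costs for 1 TB (1024 GB),
# using assumed illustrative rates -- not authoritative AWS pricing.
awk 'BEGIN {
  gb = 1024
  printf "Store 1 TB:    $%.2f/month\n", gb * 0.00099
  printf "Retrieve 1 TB: $%.2f (mostly transfer out)\n", gb * (0.09 + 0.0025)
}'
```

At these rates storage comes to about $1/month and a full 1TB retrieval lands in the mid-$90s, consistent with the ~$96 figure.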

February 11, 2026 · 11 min · Aleksei Aksenov
Transcribe Audio to Text Locally


Setup Whisper on macOS Locally

git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
bash ./models/download-ggml-model.sh large-v3
python -m venv whisper-env
source whisper-env/bin/activate
pip3 install ane_transformers openai-whisper coremltools
./models/generate-coreml-model.sh large-v3
brew install cmake
WHISPER_COREML=1 make -j
nano ~/.zshrc
source ~/.zshrc

Add the function below to ~/.zshrc:

function transcribe_ru() {
  if [ -z "$1" ]; then
    echo "Usage: transcribe_ru <audio_file> [model]"
    echo "Default model: large-v3"
    return 1
  fi

  local input_file="$1"
  local filename=$(basename "$input_file")
  local stem="${filename%.*}"
  local ext="${filename##*.}"
  local model="${2:-large-v3}"
  local whisper_path="$HOME/Documents/projects/ai-whisper/whisper.cpp"
  local whisper_bin="$whisper_path/build/bin/whisper-cli"

  # Check if model exists
  if [ ! -f "$whisper_path/models/ggml-${model}.bin" ]; then
    echo "Error: Model 'ggml-${model}.bin' not found in $whisper_path/models/"
    return 1
  fi

  # Only convert if not WAV
  local audio_file="$input_file"
  if [ "$ext" != "wav" ] && [ "$ext" != "WAV" ]; then
    echo "Converting ${ext} to 16kHz mono WAV..."
    audio_file="/tmp/${stem}_16k.wav"
    ffmpeg -y -i "$input_file" -ar 16000 -ac 1 -c:a pcm_s16le "$audio_file" > /dev/null 2>&1
    if [ ! -f "$audio_file" ]; then
      echo "Error: Audio conversion failed."
      return 1
    fi
  else
    echo "Using existing WAV file (assuming 16kHz mono)..."
  fi

  echo "Transcribing (Language: Russian, Model: $model, VAD: ON)..."
  "$whisper_bin" \
    -m "$whisper_path/models/ggml-${model}.bin" \
    -f "$audio_file" \
    -l ru \
    -otxt \
    -of "${stem}"

  echo "✅ Done! Output: ${stem}.txt"

  # Cleanup temp file if created
  if [ "$audio_file" != "$input_file" ]; then
    rm "$audio_file"
  fi
}

If you need a specific version of Xcode, look for it on https://xcodereleases.com/ ...
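Once the function is sourced, usage looks like this (the file paths are just illustrative examples, not from the setup):

```shell
# Transcribe an m4a recording with the default large-v3 model;
# the function first converts it to 16 kHz mono WAV via ffmpeg.
transcribe_ru ~/Recordings/interview.m4a

# Or pick another downloaded ggml model explicitly:
transcribe_ru ~/Recordings/interview.m4a large-v3-turbo
```

The transcript lands next to you as interview.txt.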

December 26, 2025 · 2 min · Aleksei Aksenov
ИИтоги 2025 (AI Results 2025)


Video recording of the conference.

- https://manus.im works fine for presentations.
- Google AI Studio.
- https://notebooklm.google.com/ - you can create very impressive audio recaps from all kinds of input data; great presentations and video presentations.
- nano banana can use up to 14 reference images and produces infographics.
- freepeak and weavy - interaction with visual neural networks to generate different images in a single format; their node-based interfaces for projects are excellent.
- gilf.app - generates video; lots of different presets for video generation, but very expensive. ...

December 23, 2025 · 2 min · Aleksei Aksenov
Spring Boot AI Chat Example


Initial Setup with claude.ai

Chat link. The post that inspired me.

First of all, I opened claude.ai, fed it an image, and gave it this prompt:

create spring boot app based on this image

Here is the result generated with Claude:

import React, { useState } from 'react';
import { Copy, Check } from 'lucide-react';

const SpringAIApp = () => {
  const [copiedId, setCopiedId] = useState(null);

  const copyToClipboard = (text, id) => {
    navigator.clipboard.writeText(text);
    setCopiedId(id);
    setTimeout(() => setCopiedId(null), 2000);
  };

  const CodeBlock = ({ title, code, id, language = "xml" }) => (
    <div className="mb-6 bg-gray-800 rounded-lg overflow-hidden">
      <div className="bg-gray-700 px-4 py-2 flex justify-between items-center">
        <span className="text-sm font-semibold text-gray-200">{title}</span>
        <button onClick={() => copyToClipboard(code, id)} className="text-gray-300 hover:text-white transition-colors">
          {copiedId === id ? <Check size={18} /> : <Copy size={18} />}
        </button>
      </div>
      <pre className="p-4 overflow-x-auto text-sm">
        <code className={`language-${language}`}>{code}</code>
      </pre>
    </div>
  );

  const pomXml = `<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.0</version>
  </parent>
  <groupId>com.example</groupId>
  <artifactId>spring-ai-chat</artifactId>
  <version>1.0.0</version>
  <properties>
    <java.version>17</java.version>
    <spring-ai.version>1.0.0</spring-ai.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Spring AI Ollama Starter -->
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-starter-model-ollama</artifactId>
    </dependency>
  </dependencies>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-bom</artifactId>
        <version>\${spring-ai.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>`;

  const applicationProps = `# Server Configuration
server.port=8080

# Spring AI Ollama Configuration
spring.ai.ollama.base-url=http://localhost:11431
spring.ai.ollama.chat.model=qwen2.5:3b
spring.ai.ollama.chat.options.temperature=0.4`;

  const applicationYml = `server:
  port: 8080
spring:
  ai:
    ollama:
      base-url: http://localhost:11431
      chat:
        model: qwen2.5:3b
        options:
          temperature: 0.4`;

  const configClass = `package com.example.config;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatClientConfig {

    @Bean
    public ChatClient chatClient(ChatClient.Builder builder) {
        return builder
            .defaultSystem("""
                You're Julius Caesar. Your role is to answer questions in your own unique style.
                When you receive a request from a user, you must answer them briefly (no more than 100 words).
                """)
            .build();
    }
}`;

  const controllerClass = `package com.example.controller;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/")
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    @GetMapping("/chat")
    public String chat(@RequestParam String message) {
        return chatClient
            .prompt()
            .user(message)
            .call()
            .content();
    }
}`;

  const mainClass = `package com.example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringAiChatApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringAiChatApplication.class, args);
    }
}`;

  return (
    <div className="min-h-screen bg-gradient-to-br from-gray-900 via-blue-900 to-gray-900 text-white p-8">
      <div className="max-w-5xl mx-auto">
        <div className="text-center mb-12">
          <h1 className="text-5xl font-bold mb-4 bg-gradient-to-r from-blue-400 to-green-400 bg-clip-text text-transparent">Spring Boot AI Chat Application</h1>
          <p className="text-gray-300 text-lg">Integration with Ollama using Spring AI</p>
        </div>
        <div className="space-y-8">
          {/* Step 1 */}
          <div className="bg-gray-800/50 backdrop-blur-sm rounded-xl p-6 border border-gray-700">
            <div className="flex items-center mb-4">
              <span className="bg-blue-500 text-white rounded-full w-8 h-8 flex items-center justify-center font-bold mr-3">1</span>
              <h2 className="text-2xl font-bold text-blue-400">Add Dependencies</h2>
            </div>
            <p className="text-gray-300 mb-4">Add Spring AI Ollama starter to your pom.xml</p>
            <CodeBlock title="pom.xml" code={pomXml} id="pom" />
          </div>
          {/* Step 2 */}
          <div className="bg-gray-800/50 backdrop-blur-sm rounded-xl p-6 border border-gray-700">
            <div className="flex items-center mb-4">
              <span className="bg-green-500 text-white rounded-full w-8 h-8 flex items-center justify-center font-bold mr-3">2</span>
              <h2 className="text-2xl font-bold text-green-400">Configure Settings</h2>
            </div>
            <p className="text-gray-300 mb-4">Set up Ollama connection in application.properties or application.yml</p>
            <CodeBlock title="application.properties" code={applicationProps} id="props" language="properties" />
            <div className="mt-4">
              <CodeBlock title="application.yml (Alternative)" code={applicationYml} id="yml" language="yaml" />
            </div>
          </div>
          {/* Step 3 */}
          <div className="bg-gray-800/50 backdrop-blur-sm rounded-xl p-6 border border-gray-700">
            <div className="flex items-center mb-4">
              <span className="bg-purple-500 text-white rounded-full w-8 h-8 flex items-center justify-center font-bold mr-3">3</span>
              <h2 className="text-2xl font-bold text-purple-400">Create ChatClient Bean</h2>
            </div>
            <p className="text-gray-300 mb-4">Configure ChatClient with custom system prompt</p>
            <CodeBlock title="ChatClientConfig.java" code={configClass} id="config" language="java" />
          </div>
          {/* Step 4 */}
          <div className="bg-gray-800/50 backdrop-blur-sm rounded-xl p-6 border border-gray-700">
            <div className="flex items-center mb-4">
              <span className="bg-orange-500 text-white rounded-full w-8 h-8 flex items-center justify-center font-bold mr-3">4</span>
              <h2 className="text-2xl font-bold text-orange-400">Create REST Controller</h2>
            </div>
            <p className="text-gray-300 mb-4">Build a simple chat endpoint</p>
            <CodeBlock title="ChatController.java" code={controllerClass} id="controller" language="java" />
          </div>
          {/* Step 5 */}
          <div className="bg-gray-800/50 backdrop-blur-sm rounded-xl p-6 border border-gray-700">
            <div className="flex items-center mb-4">
              <span className="bg-pink-500 text-white rounded-full w-8 h-8 flex items-center justify-center font-bold mr-3">5</span>
              <h2 className="text-2xl font-bold text-pink-400">Main Application Class</h2>
            </div>
            <CodeBlock title="SpringAiChatApplication.java" code={mainClass} id="main" language="java" />
          </div>
          {/* Usage Instructions */}
          <div className="bg-gradient-to-r from-blue-600 to-purple-600 rounded-xl p-6 border border-blue-500">
            <h2 className="text-2xl font-bold mb-4">🚀 How to Run</h2>
            <div className="space-y-3 text-gray-100">
              <div className="flex items-start">
                <span className="font-mono bg-black/30 px-2 py-1 rounded mr-3">1.</span>
                <span>Make sure Ollama is running on <code className="bg-black/30 px-2 py-1 rounded">localhost:11431</code></span>
              </div>
              <div className="flex items-start">
                <span className="font-mono bg-black/30 px-2 py-1 rounded mr-3">2.</span>
                <span>Pull the model: <code className="bg-black/30 px-2 py-1 rounded">ollama pull qwen2.5:3b</code></span>
              </div>
              <div className="flex items-start">
                <span className="font-mono bg-black/30 px-2 py-1 rounded mr-3">3.</span>
                <span>Run: <code className="bg-black/30 px-2 py-1 rounded">mvn spring-boot:run</code></span>
              </div>
              <div className="flex items-start">
                <span className="font-mono bg-black/30 px-2 py-1 rounded mr-3">4.</span>
                <span>Test: <code className="bg-black/30 px-2 py-1 rounded">http://localhost:8080/chat?message=Tell me about the calendar</code></span>
              </div>
            </div>
          </div>
          {/* Project Structure */}
          <div className="bg-gray-800/50 backdrop-blur-sm rounded-xl p-6 border border-gray-700">
            <h2 className="text-2xl font-bold mb-4 text-cyan-400">📁 Project Structure</h2>
            <pre className="bg-black/50 p-4 rounded-lg text-sm text-gray-300 overflow-x-auto">
{`spring-ai-chat/
├── src/
│   ├── main/
│   │   ├── java/
│   │   │   └── com/
│   │   │       └── example/
│   │   │           ├── SpringAiChatApplication.java
│   │   │           ├── config/
│   │   │           │   └── ChatClientConfig.java
│   │   │           └── controller/
│   │   │               └── ChatController.java
│   │   └── resources/
│   │       └── application.properties
│   └── test/
└── pom.xml`}
            </pre>
          </div>
        </div>
        <div className="mt-12 text-center text-gray-400 text-sm">
          <p>Built with Spring Boot 3.2.0 and Spring AI 1.0.0</p>
        </div>
      </div>
    </div>
  );
};

export default SpringAIApp;

Then I created an empty project directory, copied the generated file into it, and moved to Cline with the following prompt. ...

December 21, 2025 · 5 min · Aleksei Aksenov
Testing Local and Cloud-Based AI Models

My Experience Setting Up and Using AI Tools in Development

Ways to Write Code with AI

Fundamentally, there are several ways to write code using AI:

- paste pieces of code into DeepSeek, Qwen, Perplexity, Gemini
- buy a 20 USD subscription to a service such as Cursor, Claude, Kiro, Antigravity, etc.
- install an LLM on a server
- pay for the tokens you use

Below is a description of what I have personally used and what, in my opinion, deserves attention. I would be glad to hear about other ways to write code with AI in the comments.

Pasting Pieces of Code

A workable option if you have no way to use the others. Main advantages ...

December 20, 2025 · 10 min · Aleksei Aksenov