open ai service

2026-02-08 12:05:05 +06:00
parent d7722ad81d
commit 3209827e92
29 changed files with 2175 additions and 0 deletions

books_flutter/.vscode/launch.json

@@ -0,0 +1,31 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Flutter",
"type": "dart",
"request": "launch",
"program": "lib/main.dart"
},
{
"name": "books_flutter",
"request": "launch",
"type": "dart"
},
{
"name": "books_flutter (profile mode)",
"request": "launch",
"type": "dart",
"flutterMode": "profile"
},
{
"name": "books_flutter (release mode)",
"request": "launch",
"type": "dart",
"flutterMode": "release"
}
]
}

books_flutter/CLAUDE.md

@@ -0,0 +1,300 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a Flutter mobile application for managing a personal book library. The app features book cataloging, categorization, reading status tracking, and cover scanning capabilities (via camera + Gemini AI).
**Tech Stack:**
- Flutter SDK ^3.10.8
- State Management: flutter_bloc ^9.1.0
- UI: Material 3 with Google Fonts (Inter)
- AI: google_generative_ai ^0.4.6 (for book cover analysis)
- Camera: camera ^0.11.1
## Development Commands
### Running the App
```bash
# Run on connected device/simulator
flutter run
# Run on specific device
flutter run -d <device-id>
# Run in release mode
flutter run --release
# List available devices
flutter devices
```
### Building
```bash
# Build APK for Android
flutter build apk
# Build iOS
flutter build ios
# Build for web (not currently configured)
flutter build web
```
### Code Quality
```bash
# Install/update dependencies
flutter pub get
# Check for outdated packages
flutter pub outdated
# Run static analysis
flutter analyze
# Format all Dart files (flutter format is deprecated; use dart format)
dart format lib/
# Run tests
flutter test
```
### Using Dart MCP Tools
When the Dart MCP server is available, prefer using these tools instead of bash commands:
- `mcp__dart__analyze_files` instead of `flutter analyze`
- `mcp__dart__dart_format` instead of `dart format`
- `mcp__dart__run_tests` instead of `flutter test`
- `mcp__dart__list_devices` to see available devices
- `mcp__dart__launch_app` to run the app with DTD integration
## Architecture
### State Management (BLoC Pattern)
The app uses **flutter_bloc** for state management with two main BLoCs:
#### 1. BookBloc (`lib/bloc/book_bloc.dart`)
Manages the book collection state and operations:
- **State**: `BookState` containing `List<Book>`
- **Events**:
- `AddBook(book)` - Add new book to library
- `UpdateBook(book)` - Update existing book
- `DeleteBook(id)` - Remove book from library
- `ToggleFavorite(id)` - Toggle favorite status
- **Initial State**: Loads from `initialBooks` in `constants.dart`
- **Note**: Currently uses in-memory storage; no persistence layer
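The handlers above can be sketched with flutter_bloc's `on<Event>` API. This is a minimal sketch, assuming `BookState` exposes a `books` list via a plain constructor; the actual field and constructor names in `book_bloc.dart` may differ:

```dart
import 'package:flutter_bloc/flutter_bloc.dart';

class BookBloc extends Bloc<BookEvent, BookState> {
  BookBloc() : super(BookState(books: initialBooks)) {
    // Append the new book to an immutable copy of the list.
    on<AddBook>((event, emit) =>
        emit(BookState(books: [...state.books, event.book])));
    // Replace the matching record wholesale; records cannot be mutated.
    on<UpdateBook>((event, emit) => emit(BookState(
        books: state.books
            .map((b) => b.id == event.book.id ? event.book : b)
            .toList())));
    on<DeleteBook>((event, emit) => emit(BookState(
        books: state.books.where((b) => b.id != event.id).toList())));
  }
}
```

`ToggleFavorite` follows the same shape, using the record-copy pattern described under Data Models to flip `isFavorite`.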
#### 2. NavigationBloc (`lib/bloc/navigation_bloc.dart`)
Manages app navigation and screen state:
- **State**: `NavigationState` with:
- `screen` (AppScreen enum) - Current screen
- `selectedBook` - Book being viewed/edited
- `prefilledData` - Data for pre-populating forms
- **Event**: `NavigateTo(screen, {selectedBook, prefilledData})`
- **Pattern**: Declarative navigation where UI rebuilds based on state
- **Important**: `prefilledData` is used when scanning covers to prefill book form, while `selectedBook` is used for editing existing books
### Data Models (`lib/models/models.dart`)
Uses Dart 3 **record types** for immutability:
```dart
typedef Book = ({
String id,
String title,
String author,
String genre,
String annotation,
String? coverUrl,
int? pages,
String? language,
int? publishedYear,
double? rating,
String status, // 'reading', 'done', 'want_to_read'
double? progress, // 0-100 for reading progress
bool isFavorite,
});
```
**Important**: Records are immutable. To update a book, create a new record with updated fields using record syntax:
```dart
final updatedBook = (
id: book.id,
title: newTitle,
// ... copy all other fields
);
```
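Because records have no built-in `copyWith`, a hand-written extension can hide that copying. This is a hypothetical helper, not present in the codebase, shown with a few fields; the remaining optional parameters follow the same pattern:

```dart
// Hypothetical helper (not in the codebase): emulates copyWith for the
// Book record so call sites only name the fields that change.
extension BookCopy on Book {
  Book copyWith({String? title, double? progress, bool? isFavorite}) => (
        id: id,
        title: title ?? this.title,
        author: author,
        genre: genre,
        annotation: annotation,
        coverUrl: coverUrl,
        pages: pages,
        language: language,
        publishedYear: publishedYear,
        rating: rating,
        status: status,
        progress: progress ?? this.progress,
        isFavorite: isFavorite ?? this.isFavorite,
      );
}
```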
### Navigation Flow
The app uses a custom navigation system via `NavigationBloc`:
1. **Library Screen** (default) → Shows all books in grid/category view
2. **Categories Screen** → Browse books by predefined categories
3. **Book Details** → View/edit single book (triggered by tapping book card)
4. **Add/Edit Book** → Form for adding new books or editing existing
5. **Scanner Screen** → Camera interface for scanning book covers
6. **Wishlist/Settings** → Placeholder screens
**Navigation Pattern:**
```dart
context.read<NavigationBloc>().add(
NavigateTo(AppScreen.details, selectedBook: book)
);
```
The main shell (`_AppShell` in `main.dart`) rebuilds based on `NavigationState.screen`.
### Theme System (Material 3)
**Critical**: This app uses a **centralized theme system**. Never hardcode colors, text styles, or spacing.
**Theme Files:**
- `lib/theme/app_colors.dart` - Semantic color constants (cyan-based palette)
- `lib/theme/app_spacing.dart` - Spacing scale (8px base) and border radius
- `lib/theme/app_theme.dart` - Material 3 ThemeData with component themes
**Usage Pattern:**
```dart
final colorScheme = Theme.of(context).colorScheme;
final textTheme = Theme.of(context).textTheme;
// Use semantic colors
Container(color: colorScheme.primary)
// Use text styles
Text('Title', style: textTheme.displayMedium)
// Use spacing constants
Padding(padding: EdgeInsets.all(AppSpacing.md))
// Use shadows
BoxDecoration(boxShadow: AppTheme.shadowMd)
```
**Color Scheme:**
- Primary: #0891B2 (Cyan-600)
- Success/CTA: #22C55E (Green-500)
- Background: #ECFEFF (Cyan-50)
- Surface: #FFFFFF
**Typography:** Inter font family loaded via Google Fonts with weights 300-700.
### Screen Structure
All main screens follow this pattern:
1. Wrap in `SafeArea` for notch/status bar handling
2. Use `BlocBuilder` to listen to relevant BLoC state
3. Access theme via `Theme.of(context)`
4. Use `AppSpacing.*` constants for all padding/margins
5. Use theme colors and text styles exclusively
**Example:**
```dart
class MyScreen extends StatelessWidget {
@override
Widget build(BuildContext context) {
final colorScheme = Theme.of(context).colorScheme;
final textTheme = Theme.of(context).textTheme;
return SafeArea(
child: BlocBuilder<BookBloc, BookState>(
builder: (context, state) {
return Padding(
padding: EdgeInsets.all(AppSpacing.lg),
child: Text(
'Hello',
style: textTheme.headlineMedium,
),
);
},
),
);
}
}
```
### Widget Conventions
**BookCard** (`lib/widgets/book_card.dart`):
- Displays book cover with metadata overlay
- Includes Hero animation with tag `'book-cover-${book.id}'`
- Shows shimmer loading while image loads
- Has hover effect on desktop/web (1.02 scale)
- Displays favorite badge, status badge, or progress bar based on book state
**Hero Animations:**
When navigating from library to book details, book covers animate smoothly:
```dart
Hero(
tag: 'book-cover-${book.id}',
child: Image.network(book.coverUrl),
)
```
Both `BookCard` and `BookDetailsScreen` must use matching tags.
### Gemini AI Integration
The `GeminiService` (`lib/services/gemini_service.dart`) is a placeholder for future AI-powered book cover scanning:
- Takes base64-encoded image from camera
- Will analyze cover and extract metadata (title, author, etc.)
- Returns `Book?` record with prefilled data
- Currently returns `null` - implementation pending
**Intended Flow:**
1. User opens Scanner Screen
2. Takes photo of book cover
3. Image sent to `GeminiService.analyzeBookCover()`
4. Extracted data passed to Add Book screen via `NavigationBloc` with `prefilledData`
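The intended flow could be wired up roughly as below. This is a sketch under assumptions: the `AppScreen.addBook` enum value, the static `analyzeBookCover` method, and the capture callback name are all illustrative:

```dart
// Illustrative scan handler: analyze the captured image, then hand the
// result to the add-book form via NavigationBloc's prefilledData.
Future<void> onCapture(BuildContext context, String base64Image) async {
  final Book? scanned = await GeminiService.analyzeBookCover(base64Image);
  if (scanned != null && context.mounted) {
    context.read<NavigationBloc>().add(
          NavigateTo(AppScreen.addBook, prefilledData: scanned),
        );
  }
}
```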
## Important Patterns
### When Adding New Features
1. **New Book Fields**: Update the `Book` typedef in `models.dart` and all places that construct book records
2. **New Screens**: Add to `AppScreen` enum, handle in `_AppShell` switch statement
3. **Theme Changes**: Only modify theme files, never inline styles
4. **Navigation**: Always use `NavigationBloc`, never `Navigator.push()`
### Code Style Requirements
- **Immutability**: Use records for data models, never mutable classes
- **Theme Compliance**: Zero hardcoded colors/styles/spacing
- **BLoC Pattern**: UI is always a pure function of state
- **Const Constructors**: Use `const` for all stateless widgets and values
- **Reduced Motion**: Check `MediaQuery.of(context).disableAnimations` for animations
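As an example of the reduced-motion rule above, animated widgets can collapse their duration to zero when the OS setting is on. A minimal sketch (helper name is illustrative):

```dart
import 'package:flutter/material.dart';

Widget fadeIn(BuildContext context,
    {required bool visible, required Widget child}) {
  // Respect the OS-level reduced-motion setting.
  final reduceMotion = MediaQuery.of(context).disableAnimations;
  return AnimatedOpacity(
    opacity: visible ? 1 : 0,
    duration:
        reduceMotion ? Duration.zero : const Duration(milliseconds: 200),
    child: child,
  );
}
```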
### Testing Gotchas
- Books are stored in-memory only; restarting app resets to `initialBooks`
- Camera requires physical device or simulator with camera support
- Gemini API requires valid API key (not implemented yet)
- Hero animations require matching tags between screens
## Project-Specific Notes
### Why Records Instead of Classes?
This codebase uses Dart 3 record types for immutability and simplicity. When updating books, create new records rather than mutating fields. This makes BLoC state updates predictable and prevents accidental mutations.
### Navigation Without Navigator
The app doesn't use Flutter's built-in Navigator. Instead, `NavigationBloc` tracks the current screen, and `_AppShell` rebuilds the entire UI tree based on state. This gives centralized control over navigation state but means:
- No native back button handling (would need to emit `NavigateTo` events)
- No deep linking support (yet)
- All screens must be handled in the main switch statement
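Native back handling could be bridged into this model by intercepting the system back gesture and emitting a `NavigateTo` event instead of popping a route. A sketch of what that might look like inside `_AppShell` (this is not in the codebase, and `PopScope` with `onPopInvokedWithResult` assumes a recent Flutter SDK):

```dart
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

Widget buildShell(BuildContext context, Widget currentScreen) {
  return PopScope(
    canPop: false, // never pop; navigation is fully state-driven
    onPopInvokedWithResult: (didPop, result) {
      if (!didPop) {
        // Treat the system back gesture as "go to the library screen".
        context.read<NavigationBloc>().add(NavigateTo(AppScreen.library));
      }
    },
    child: currentScreen,
  );
}
```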
### Initial Data
Books are initialized from `initialBooks` constant in `lib/constants/constants.dart`. Categories are defined in the same file. To add sample data, modify these constants.
### Future Enhancements
Based on the codebase structure, likely next steps:
- Implement persistence (SharedPreferences, SQLite, or Firebase)
- Complete Gemini AI integration for cover scanning
- Add native back button handling
- Implement book search/filtering
- Add reading statistics/charts
- Support for book series and collections


@@ -0,0 +1,133 @@
# OpenAI Service Setup Guide
This document explains how to configure and use the OpenAI service for parsing book metadata from images.
## Overview
The Bookshelf app now supports two AI services for book cover analysis:
1. **Google Gemini** (original service)
2. **OpenAI** (new alternate service)
The app will automatically try OpenAI first if configured, and fall back to Gemini if OpenAI fails or is not configured.
## Configuration
### Step 1: Configure API Keys
Edit `lib/config/api_config.dart`:
```dart
class ApiConfig {
// Gemini API (original service)
static const String geminiApiKey = 'YOUR_GEMINI_API_KEY_HERE';
// OpenAI API (new service)
static const String openaiApiKey = 'YOUR_OPENAI_API_KEY_HERE';
// OpenAI API endpoint
static const String openaiBaseUrl = 'http://localhost:8317';
}
```
### Step 2: Replace API Keys
Replace the placeholder values with your actual API keys:
- **OpenAI API Key**: Get from your OpenAI account or local OpenAI-compatible server
- **Gemini API Key**: Get from [Google AI Studio](https://makersuite.google.com/app/apikey) (fallback)
## OpenAI Service Details
### Endpoint Configuration
The OpenAI service is configured to use:
- **Default endpoint**: `http://localhost:8317/v1/chat/completions`
- **Model**: `glm-4` (vision-capable)
- **Max tokens**: 500
- **Temperature**: 0.3
**Important**: The server must support the `glm-4` model; this model name is hardcoded for the OpenAI-compatible endpoint.
You can customize the endpoint by changing the `openaiBaseUrl` in `api_config.dart`.
### How It Works
1. The app captures an image using the camera
2. The image is converted to base64 format
3. A prompt is sent to the OpenAI API with the image
4. The API analyzes the image and extracts:
- Book title
- Author name
- Genre (fiction/fantasy/science/detective/biography/other)
- Brief annotation/description
5. The parsed data is returned and pre-filled in the add book form
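The steps above boil down to a single HTTP POST using the `http` package. The JSON shape follows the standard OpenAI chat-completions format; the exact prompt and parsing in `openai_service.dart` may differ, so treat this as a sketch:

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// Sends a base64-encoded cover image to an OpenAI-compatible endpoint
// and returns the raw model reply (JSON text to be parsed upstream).
Future<String?> analyzeCover(
    String baseUrl, String apiKey, String base64Image) async {
  final response = await http.post(
    Uri.parse('$baseUrl/v1/chat/completions'),
    headers: {
      'Authorization': 'Bearer $apiKey',
      'Content-Type': 'application/json',
    },
    body: jsonEncode({
      'model': 'glm-4',
      'max_tokens': 500,
      'temperature': 0.3,
      'messages': [
        {
          'role': 'user',
          'content': [
            {
              'type': 'text',
              'text': 'Extract the title, author, genre and a short '
                  'annotation from this book cover. Reply as JSON.'
            },
            {
              'type': 'image_url',
              'image_url': {'url': 'data:image/jpeg;base64,$base64Image'}
            },
          ],
        },
      ],
    }),
  );
  if (response.statusCode != 200) return null; // surfaces as "scan failed"
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  return body['choices'][0]['message']['content'] as String?;
}
```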
### Service Priority
The app follows this priority order:
1. **OpenAI** (if API key is configured) - tried first
2. **Gemini** (if OpenAI fails or is not configured) - fallback
This keeps cover scanning working as long as at least one service is configured and reachable.
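The priority order can be sketched as follows. Whether these services expose static methods, and how "configured" is detected, is assumed here rather than taken from the implementation:

```dart
// Sketch of the OpenAI-first, Gemini-fallback order described above.
Future<Book?> analyzeWithFallback(String base64Image) async {
  if (ApiConfig.openaiApiKey != 'YOUR_OPENAI_API_KEY_HERE') {
    try {
      final book = await OpenAIService.analyzeBookCover(base64Image);
      if (book != null) return book; // OpenAI succeeded
    } catch (_) {
      // Swallow the error and fall through to Gemini.
    }
  }
  if (ApiConfig.geminiApiKey != 'YOUR_GEMINI_API_KEY_HERE') {
    return GeminiService.analyzeBookCover(base64Image);
  }
  return null; // neither service configured
}
```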
## Testing
To test the OpenAI service:
1. Make sure your OpenAI server is running at `http://localhost:8317`
2. Configure your OpenAI API key in `api_config.dart`
3. Run the app: `flutter run`
4. Navigate to the "Add Book" screen
5. Tap the camera icon to scan a book cover
6. The app will use OpenAI to analyze the image
## Troubleshooting
### "API ключ не настроен (ни OpenAI, ни Gemini)"
This error, "API key is not configured (neither OpenAI nor Gemini)", means no API key is set. At least one must be configured in `api_config.dart`.
### "Не удалось распознать книгу"
This error, "Failed to recognize the book", can occur if:
- The API request failed (check your server logs)
- The image quality is poor
- The API returned invalid JSON
- Network connectivity issues
### OpenAI Service Not Working
If OpenAI fails, the app will automatically fall back to Gemini. Check the console output to see which service is being used:
```
Using OpenAI service for analysis
```
or
```
Using Gemini service for analysis
```
### Network Issues
Make sure:
- Your OpenAI server is accessible from the device/emulator
- For Android emulator: Use `10.0.2.2` instead of `localhost`
- For iOS simulator: `localhost` should work
- For physical device: Use your machine's actual IP address
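The host rules above could be centralized in a small helper. This is a hypothetical function, not in the codebase; note that `Platform.isAndroid` is also true on physical Android devices, which still need the machine's LAN IP as listed above:

```dart
import 'dart:io' show Platform;

// Picks the OpenAI server host for the current environment.
String openaiBaseUrl() {
  // Android emulators reach the host machine via 10.0.2.2; the iOS
  // simulator shares the host network, so localhost works directly.
  if (Platform.isAndroid) return 'http://10.0.2.2:8317';
  return 'http://localhost:8317';
}
```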
## Files Modified
1. **lib/services/openai_service.dart** - New OpenAI service implementation
2. **lib/config/api_config.dart** - Added OpenAI configuration
3. **lib/screens/scanner_screen.dart** - Updated to support both services
4. **lib/screens/add_book_screen.dart** - Updated to pass OpenAI configuration
5. **pubspec.yaml** - Added `http` package dependency
## Future Enhancements
Possible improvements:
- Add UI option to manually select which service to use
- Add retry logic with different services
- Implement caching of recognized books
- Add support for multiple OpenAI models
- Add detailed error messages and logging


@@ -0,0 +1,241 @@
# Refactoring Summary: BLoC Architecture Improvements
## Overview
All screens have been refactored to follow best practices with dedicated BLoCs and separate event/state files. This improves code organization, testability, and maintainability.
## Architecture Pattern
Each screen now follows this structure:
```
lib/bloc/
├── [feature]_event.dart # Event definitions
├── [feature]_state.dart # State definitions
└── [feature]_bloc.dart # Business logic
lib/screens/
└── [feature]_screen.dart # UI only (stateless)
```
## Changes Made
### 1. BookBloc (Refactored)
**Files Created:**
- `lib/bloc/book_event.dart` - Events: AddBook, UpdateBook, DeleteBook, ToggleFavorite
- `lib/bloc/book_state.dart` - State containing List<Book>
**Changes:**
- Separated events and state from main BLoC file
- BLoC handles global book collection management
- Used across all screens for book data access
### 2. ScannerBloc (New)
**Files Created:**
- `lib/bloc/scanner_event.dart` - Events: InitializeCamera, CaptureAndAnalyze, SwitchCamera, DismissError
- `lib/bloc/scanner_state.dart` - State: isInitialized, isCapturing, isAnalyzing, hasPermissionError, errorMessage, analyzedBook
- `lib/bloc/scanner_bloc.dart` - Camera and AI analysis business logic
**Screen Changes:**
- `lib/screens/scanner_screen.dart` converted from StatefulWidget to StatelessWidget
- Removed all setState() calls and local state management
- Uses BlocProvider for state management
- Uses BlocListener for side effects (errors, navigation)
- Uses BlocBuilder for reactive UI
**Business Logic Moved to BLoC:**
- Camera initialization and permission handling
- Image capture process
- AI service selection (OpenAI first, Gemini fallback)
- Error state management
- Temporary file cleanup
### 3. LibraryBloc (New)
**Files Created:**
- `lib/bloc/library_event.dart` - Events: UpdateSearchQuery, ChangeTab
- `lib/bloc/library_state.dart` - State: searchQuery, tabIndex
- `lib/bloc/library_bloc.dart` - Search and tab management logic
**Screen Changes:**
- `lib/screens/library_screen.dart` converted from StatefulWidget to StatelessWidget
- Removed local state (_search, _tabIndex)
- Uses LibraryBloc for UI state
- Uses BookBloc for book data
- Nested BlocBuilders for optimal rebuilds
**Business Logic Moved to BLoC:**
- Search query management
- Tab selection state
- Book filtering logic (still in UI, but uses BLoC state)
### 4. AddBookBloc (New)
**Files Created:**
- `lib/bloc/add_book_event.dart` - Events: InitializeForm, UpdateTitle, UpdateAuthor, UpdateAnnotation, UpdateGenre, ApplyScannedBook, SaveBook
- `lib/bloc/add_book_state.dart` - State: title, author, annotation, genre, editBook, isSaved
- `lib/bloc/add_book_bloc.dart` - Form management and save logic
**Screen Changes:**
- `lib/screens/add_book_screen.dart` converted outer widget to StatelessWidget
- Created internal StatefulWidget for TextController lifecycle
- Uses BlocProvider with callbacks to BookBloc
- Uses BlocListener to update controllers and handle navigation
- Uses BlocBuilder for reactive form state
**Business Logic Moved to BLoC:**
- Form field state management
- Edit vs Add mode detection
- Scanned book data application
- Book creation/update logic with proper field mapping
- Save completion state
### 5. BookDetailsScreen (No Changes)
**Status:** Already stateless and has minimal business logic
- Displays book data passed as parameter
- Navigates to edit screen
- Calls BookBloc for delete operation
- No dedicated BLoC needed as it's a simple presentation screen
## Benefits
### ✅ Separation of Concerns
- UI components only handle presentation
- Business logic isolated in BLoCs
- Clear boundaries between layers
### ✅ Testability
- BLoCs can be unit tested independently
- No UI dependencies in business logic
- Events and states are simple data classes
### ✅ Maintainability
- Each file has single responsibility
- Easy to locate and modify logic
- Consistent pattern across all screens
### ✅ Scalability
- Easy to add new events and states
- BLoCs can be reused across screens
- State changes are predictable and traceable
### ✅ Reduced Boilerplate
- No manual setState() management
- Automatic UI rebuilds on state changes
- Side effects handled declaratively
## File Structure
```
lib/
├── bloc/
│ ├── book_event.dart # Book collection events
│ ├── book_state.dart # Book collection state
│ ├── book_bloc.dart # Book collection logic
│ ├── scanner_event.dart # Scanner events
│ ├── scanner_state.dart # Scanner state
│ ├── scanner_bloc.dart # Scanner logic
│ ├── library_event.dart # Library UI events
│ ├── library_state.dart # Library UI state
│ ├── library_bloc.dart # Library UI logic
│ ├── add_book_event.dart # Add/Edit book events
│ ├── add_book_state.dart # Add/Edit book state
│ └── add_book_bloc.dart # Add/Edit book logic
├── screens/
│ ├── library_screen.dart # Stateless - uses LibraryBloc + BookBloc
│ ├── scanner_screen.dart # Stateless - uses ScannerBloc
│ ├── add_book_screen.dart # Stateless wrapper + Stateful content
│ └── book_details_screen.dart # Stateless - no dedicated BLoC
└── ...
```
## Migration Guide
### Before (StatefulWidget with setState):
```dart
class MyScreen extends StatefulWidget {
@override
State<MyScreen> createState() => _MyScreenState();
}
class _MyScreenState extends State<MyScreen> {
String _value = '';
void _updateValue(String newValue) {
setState(() => _value = newValue);
}
@override
Widget build(BuildContext context) {
return Text(_value);
}
}
```
### After (StatelessWidget with BLoC):
```dart
// Event
class UpdateValue extends MyEvent {
final String value;
UpdateValue(this.value);
}
// State
class MyState {
final String value;
const MyState({this.value = ''});
MyState copyWith({String? value}) => MyState(value: value ?? this.value);
}
// BLoC
class MyBloc extends Bloc<MyEvent, MyState> {
MyBloc() : super(const MyState()) {
on<UpdateValue>((event, emit) => emit(state.copyWith(value: event.value)));
}
}
// Screen
class MyScreen extends StatelessWidget {
@override
Widget build(BuildContext context) {
return BlocProvider(
create: (_) => MyBloc(),
child: BlocBuilder<MyBloc, MyState>(
builder: (context, state) => Text(state.value),
),
);
}
}
```
## Testing Recommendations
### Unit Tests for BLoCs:
```dart
// Requires package:bloc_test. Bloc events are processed asynchronously,
// so a plain expect() immediately after add() can read the state too early.
blocTest<LibraryBloc, LibraryState>(
  'UpdateSearchQuery updates search query',
  build: () => LibraryBloc(),
  act: (bloc) => bloc.add(UpdateSearchQuery('test')),
  expect: () => [
    isA<LibraryState>().having((s) => s.searchQuery, 'searchQuery', 'test'),
  ],
);
```
### Widget Tests for Screens:
```dart
testWidgets('LibraryScreen displays books', (tester) async {
await tester.pumpWidget(
MultiBlocProvider(
providers: [
BlocProvider(create: (_) => BookBloc()),
BlocProvider(create: (_) => LibraryBloc()),
],
child: MaterialApp(home: LibraryScreen()),
),
);
expect(find.byType(BookCard), findsWidgets);
});
```
## Next Steps
1. Add unit tests for all BLoCs
2. Add widget tests for all screens
3. Consider adding integration tests
4. Monitor performance and optimize if needed
5. Document any screen-specific BLoC patterns


@@ -0,0 +1,131 @@
# Book Scanner Setup Guide
The book scanning feature allows users to scan book covers using their device camera and automatically extract book information using Google Gemini AI.
## Prerequisites
1. **Google Gemini API Key**: Get your free API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
2. **Device with camera**: The feature requires a camera (front or back)
3. **Camera permissions**: Users must grant camera access when prompted
## Setup Instructions
### 1. Add Your Gemini API Key
Edit the `lib/config/api_config.dart` file and replace the placeholder:
```dart
class ApiConfig {
// TODO: Replace with your actual Gemini API key
static const String geminiApiKey = 'YOUR_GEMINI_API_KEY_HERE';
}
```
Replace `YOUR_GEMINI_API_KEY_HERE` with your actual Google Gemini API key.
### 2. Permissions
The app automatically requests camera permissions. However, you may need to configure platform-specific settings:
#### Android
- Camera permissions are already configured in `android/app/src/main/AndroidManifest.xml`
- No additional setup required
#### iOS
- Camera usage description is configured in `ios/Runner/Info.plist`
- The app will request camera permission when first launched
## How It Works
1. **Camera Preview**: The scanner screen shows a live camera preview with a scanning frame
2. **Capture**: Users tap the capture button to take a photo of the book cover
3. **AI Analysis**: The image is sent to Google Gemini AI for analysis
4. **Book Extraction**: Gemini extracts:
- Book title
- Author name
- Genre (categorized into: fiction, fantasy, science, detective, biography, other)
- Annotation/description
5. **Auto-fill**: The extracted information automatically fills the book form
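The AI analysis step above amounts to one multimodal `generateContent` call with the `google_generative_ai` package. A minimal sketch; the model name and prompt are illustrative, and the real `GeminiService` wraps this with JSON parsing and error handling:

```dart
import 'dart:typed_data';
import 'package:google_generative_ai/google_generative_ai.dart';

// Sends the captured cover image plus an extraction prompt to Gemini
// and returns the raw text reply for the service layer to parse.
Future<String?> describeCover(Uint8List jpegBytes, String apiKey) async {
  final model = GenerativeModel(model: 'gemini-1.5-flash', apiKey: apiKey);
  final response = await model.generateContent([
    Content.multi([
      TextPart('Extract the book title, author, genre and a one-sentence '
          'annotation from this cover. Reply as JSON.'),
      DataPart('image/jpeg', jpegBytes),
    ]),
  ]);
  return response.text;
}
```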
## Usage
1. Open the "Add Book" screen
2. Tap the camera/scanner area
3. Grant camera permissions if prompted
4. Position the book cover within the scanning frame
5. Ensure the text is clearly visible and readable
6. Tap the capture button (large white circle)
7. Wait for the AI analysis (2-5 seconds)
8. Review and edit the auto-filled information if needed
9. Save the book
## Tips for Better Scanning
- Ensure good lighting
- Hold the device steady
- Position the book cover within the green scanning frame
- Make sure text is not blurred or obscured
- Use high contrast books (avoid glare or reflections)
- Try different angles if the first scan doesn't work
## Troubleshooting
### Camera not working
- Check if camera permissions are granted
- Close other apps that might be using the camera
- Restart the app
### Scanning fails or produces incorrect results
- Ensure the book cover text is clearly visible
- Try scanning in better lighting conditions
- Some covers with complex designs may be harder to recognize
- You can always manually edit the extracted information
### API errors
- Verify your Gemini API key is correctly configured
- Check your internet connection
- Ensure you have available API quota (free tier is generous)
## Technical Details
### Services Created
1. **CameraService** (`lib/services/camera_service.dart`)
- Manages camera initialization and lifecycle
- Handles permissions
- Provides image capture functionality
2. **GeminiService** (`lib/services/gemini_service.dart`)
- Integrates with Google Gemini AI
- Processes book cover images
- Extracts structured book metadata
- Handles error cases gracefully
### Dependencies Added
- `camera: ^0.11.1` - Camera functionality
- `google_generative_ai: ^0.4.6` - Gemini AI integration
- `permission_handler: ^11.0.0` - Permission management
### Privacy & Security
- Images are sent to Google's servers for AI analysis
- Temporary images are deleted after processing
- API keys should be kept secure and not committed to version control
- Consider using environment variables for API keys in production
## Cost Considerations
- Google Gemini API has a generous free tier
- Typical book scan uses minimal tokens
- Monitor your API usage in the Google Cloud Console if needed
## Future Enhancements
Potential improvements to consider:
- Barcode/ISBN scanning as alternative
- Offline scanning capability
- Batch scanning for multiple books
- Image quality enhancement before sending to AI
- Support for multiple languages
- Custom AI prompts for better recognition


@@ -0,0 +1,102 @@
# Testing OpenAI Service
This guide explains how to test the OpenAI service with the sample book cover image.
## Quick Test (Recommended)
Use the standalone test script for quick testing:
### Step 1: Set Your API Key
Edit `test_openai_service.dart` and replace the placeholder:
```dart
const apiKey = 'YOUR_OPENAI_API_KEY_HERE';
```
Replace `YOUR_OPENAI_API_KEY_HERE` with your actual OpenAI API key.
### Step 2: Run the Test
From the project root directory, run:
```bash
dart run test_openai_service.dart
```
## Expected Output
If successful, you'll see:
```
========================================
📖 Testing OpenAI Book Cover Analysis
========================================
Image path: samples/photo_2026-02-07_15-05-17.jpg
Image size: XXXXXX bytes
API endpoint: http://localhost:8317/v1/chat/completions
Analyzing book cover... (this may take a few seconds)
========================================
✅ Successfully analyzed book cover!
========================================
📚 Book Details:
Title: [Book Title]
Author: [Author Name]
Genre: [Genre]
Annotation: [Book description]
Language: Russian
Published Year: 2026
Rating: 5.0
========================================
```
## Troubleshooting
### Error: "Please set your OpenAI API key"
You need to edit `test_openai_service.dart` and add your actual API key.
### Error: "Image file not found"
Make sure you're running the test from the project root directory where the `samples/` folder is located.
### Error: "Failed to analyze book cover"
Check the following:
1. **Server Running**: Ensure your OpenAI server is running at `http://localhost:8317`
2. **API Key**: Verify your API key is correct
3. **Server Logs**: Check your OpenAI server logs for errors
4. **Model Support**: Ensure your server supports the `glm-4` model
5. **Network**: Check network connectivity
### Testing on Different Environments
If you're testing from a different environment (not the same machine running the server), update the `baseUrl`:
```dart
const baseUrl = 'http://YOUR_SERVER_IP:8317'; // For remote server
// or
const baseUrl = 'http://10.0.2.2:8317'; // For Android emulator
```
## Running Formal Flutter Tests
If you prefer to run the formal Flutter test:
```bash
flutter test test/openai_service_test.dart
```
Make sure to update the API key in `test/openai_service_test.dart` before running.
## Sample Image
The test uses the sample image at: `samples/photo_2026-02-07_15-05-17.jpg`
You can replace this with any book cover image you want to test.


@@ -0,0 +1,113 @@
# Open Book Icon - Exact Specifications
## Design Layout (1024x1024px)
```
┌─────────────────────────────────┐
│ │
│ (padding: 150px) │
│ │
│ ┌───────────────────┐ │
│ │ │ │
│ │ 📖 Open Book │ │
│ │ White (#FFF) │ │
│ │ ~700px width │ │
│ │ │ │
│ └───────────────────┘ │
│ │
│ Background: #0891B2 │
│ │
└─────────────────────────────────┘
```
## Two Files Needed
### 1. app_icon.png (Complete Icon)
- Size: 1024x1024px
- Background: Solid cyan `#0891B2`
- Icon: White open book, centered
- Icon size: ~700px wide, maintains aspect ratio
- Padding: ~150px from edges
### 2. app_icon_foreground.png (Adaptive Icon Foreground)
- Size: 1024x1024px
- Background: **Transparent**
- Icon: Same white book as above
- Keep in "safe zone": center 66% of canvas (~676x676px)
- Android will add the cyan background automatically
---
## Color Codes (Copy-Paste Ready)
- Cyan Background: `#0891B2` or `rgb(8, 145, 178)`
- Icon Color: `#FFFFFF` or `rgb(255, 255, 255)`
---
## Recommended Free Book Icons (Download & Use)
### Option A: Heroicons Book-Open (Clean, Modern)
- URL: https://heroicons.com
- Search: "book-open"
- Style: Outline (recommended) or Solid
- License: MIT (free to use)
### Option B: Lucide Book-Open (Minimal)
- URL: https://lucide.dev/icons/book-open
- Very clean, minimal design
- License: ISC (free to use)
### Option C: Bootstrap Icons Book (Simple)
- URL: https://icons.getbootstrap.com/icons/book/
- Several book variants available
- License: MIT
---
## Quick Canva Instructions
1. **Create design:**
- Go to Canva → "Custom size" → 1024 x 1024
2. **Version 1 (app_icon.png):**
- Background: Click background → Color → `#0891B2`
- Elements → Upload downloaded SVG book icon
- Change icon color to white
- Resize to 600-700px, center it
- Download → PNG → save as `app_icon.png`
3. **Version 2 (app_icon_foreground.png):**
- Duplicate the design
- Remove background (make transparent)
- Keep only the white book
- Download → PNG → **check "Transparent background"**
- Save as `app_icon_foreground.png`
---
## What the Final Icon Will Look Like
**On Home Screen:**
- iOS: Rounded square with cyan background + white book
- Android (modern): Adaptive shape (circle/squircle/square) with cyan background + white book
- Android (old): Rounded square like iOS
**Visual Balance:**
- Book icon should be easily recognizable even at 40x40px
- Good contrast ensures readability
- White on cyan matches your app's Material 3 theme
---
## Alternative: Use Icon Generator
If you prefer automated approach:
1. Go to [icon.kitchen](https://icon.kitchen)
2. Upload any book icon image
3. Set background color: `#0891B2`
4. Adjust size/position
5. Download all sizes
(But manual creation gives you more control!)


@@ -0,0 +1,137 @@
# App Icon Creation Guide
Your icon system is now configured! You need to create **two icon images** and place them in this directory.
## Required Files
1. **app_icon.png** (1024x1024px) - Main icon for iOS
2. **app_icon_foreground.png** (1024x1024px) - Foreground for Android adaptive icon
## Design Recommendations
**Theme Colors:**
- Primary Cyan: `#0891B2` (already set as adaptive background)
- Success Green: `#22C55E`
- White: `#FFFFFF`
**Style:** Clean, minimalistic, book-themed
---
## Option 1: Free Online Icon Makers (Easiest)
### A) Canva (Recommended)
1. Go to [canva.com](https://www.canva.com)
2. Create custom size: 1024x1024px
3. Use their free templates or design from scratch:
- Search "book icon" or "library icon"
- Change background color to `#0891B2`
- Add white book symbol/icon
4. Download as PNG (both with and without background)
### B) Figma (More Control)
1. Go to [figma.com](https://www.figma.com) (free account)
2. Create 1024x1024 frame
3. Design suggestions:
- **Simple:** Rectangle with rounded corners (#0891B2) + white book emoji 📚
- **Modern:** Gradient (cyan to teal) + minimalist book outline
- **Detailed:** Stack of 3 books with slight perspective
### C) App Icon Generators
- [appicon.co](https://appicon.co) - Upload an image, generates all sizes
- [makeappicon.com](https://makeappicon.com) - Similar service
- [icon.kitchen](https://icon.kitchen) - web version of Android Studio's icon tool
---
## Option 2: Quick Placeholder (For Testing)
Create a simple solid color icon with text:
1. Use any image editor (even Preview on Mac)
2. Create 1024x1024px canvas
3. Fill with `#0891B2`
4. Add white text: "📚" or "BL" (Book Library)
5. Save as `app_icon.png` and `app_icon_foreground.png`
---
## Option 3: AI-Generated Icon
Use AI tools to generate:
- **ChatGPT/DALL-E**: "Create a minimalistic app icon for a book library app, cyan background (#0891B2), white book symbol, 1024x1024"
- **Midjourney**: "minimalist book library app icon, cyan gradient, white geometric book, flat design, 1024x1024"
---
## What Each File Does
### app_icon.png
- Used for iOS (all sizes)
- Used as fallback for older Android devices
- Should be a **complete icon** (background + foreground)
### app_icon_foreground.png
- Android's adaptive icon foreground layer
- Should be **transparent background** with just the icon symbol
- Android will apply the cyan background automatically
- Keep important elements in the "safe zone" (center ~66% of canvas)
---
## Once You Have Your Icons
1. Save both PNG files in this directory:
- `assets/icon/app_icon.png`
- `assets/icon/app_icon_foreground.png`
2. Run these commands:
```bash
flutter pub get
dart run flutter_launcher_icons
```
3. Verify the icons were generated:
```bash
# Check Android icons
ls android/app/src/main/res/mipmap-*/
# Check iOS icons
ls ios/Runner/Assets.xcassets/AppIcon.appiconset/
```
4. Build and test:
```bash
flutter run --profile
```
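The `dart run flutter_launcher_icons` step above reads its configuration from `pubspec.yaml`. A configuration matching this guide's file names and colors would look roughly like this (keys per the flutter_launcher_icons package docs; adjust paths if yours differ):

```yaml
# pubspec.yaml — flutter_launcher_icons must also be listed under dev_dependencies
flutter_launcher_icons:
  android: true
  ios: true
  image_path: "assets/icon/app_icon.png"
  # Android adaptive icon: cyan background behind the transparent foreground layer
  adaptive_icon_background: "#0891B2"
  adaptive_icon_foreground: "assets/icon/app_icon_foreground.png"
```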
---
## Design Tips
**DO:**
- Use simple, recognizable shapes
- Ensure good contrast (white on cyan works great)
- Test at small sizes (does it still read clearly at 40x40px?)
- Keep foreground centered for adaptive icons
- Use vector shapes when possible
**DON'T:**
- Use gradients that look muddy when small
- Add tiny text (won't be readable)
- Use too many colors (stick to 2-3)
- Put important details near edges (Android will crop)
---
## Quick Start Suggestion
**Easiest path:**
1. Open Canva → Custom 1024x1024
2. Add cyan (#0891B2) background
3. Add white book icon from their library
4. Export as PNG → save as `app_icon.png`
5. Remove background → export → save as `app_icon_foreground.png`
6. Run `dart run flutter_launcher_icons`
**Total time: ~5 minutes**
@@ -0,0 +1,97 @@
import 'dart:math';
import 'package:flutter_bloc/flutter_bloc.dart';
import '../../models/models.dart';
import 'add_book_event.dart';
import 'add_book_state.dart';
class AddBookBloc extends Bloc<AddBookEvent, AddBookState> {
final void Function(Book book) onAddBook;
final void Function(Book book) onUpdateBook;
AddBookBloc({required this.onAddBook, required this.onUpdateBook})
: super(const AddBookState()) {
on<InitializeForm>(_onInitializeForm);
on<UpdateTitle>(_onUpdateTitle);
on<UpdateAuthor>(_onUpdateAuthor);
on<UpdateAnnotation>(_onUpdateAnnotation);
on<UpdateGenre>(_onUpdateGenre);
on<ApplyScannedBook>(_onApplyScannedBook);
on<SaveBook>(_onSaveBook);
}
  void _onInitializeForm(InitializeForm event, Emitter<AddBookState> emit) {
    // Prefer the book being edited; fall back to scanner-prefilled data.
    final source = event.editBook ?? event.prefilledData;
    // Nothing to prefill: keep the default empty form.
    if (source == null) return;
    emit(
      AddBookState(
        title: source.title,
        author: source.author,
        annotation: source.annotation,
        genre: source.genre.isNotEmpty ? source.genre : 'fiction',
        editBook: event.editBook,
      ),
    );
  }
void _onUpdateTitle(UpdateTitle event, Emitter<AddBookState> emit) {
emit(state.copyWith(title: event.title));
}
void _onUpdateAuthor(UpdateAuthor event, Emitter<AddBookState> emit) {
emit(state.copyWith(author: event.author));
}
void _onUpdateAnnotation(UpdateAnnotation event, Emitter<AddBookState> emit) {
emit(state.copyWith(annotation: event.annotation));
}
void _onUpdateGenre(UpdateGenre event, Emitter<AddBookState> emit) {
emit(state.copyWith(genre: event.genre));
}
void _onApplyScannedBook(ApplyScannedBook event, Emitter<AddBookState> emit) {
final scanned = event.scannedBook;
emit(
state.copyWith(
title: scanned.title,
author: scanned.author,
annotation: scanned.annotation,
genre: scanned.genre.isNotEmpty ? scanned.genre : 'fiction',
),
);
}
void _onSaveBook(SaveBook event, Emitter<AddBookState> emit) {
final existing = state.editBook;
final isEditing = existing != null;
final Book book = (
id: isEditing ? existing.id : '${Random().nextInt(100000)}',
title: state.title,
author: state.author,
genre: state.genre,
annotation: state.annotation,
coverUrl: isEditing
? existing.coverUrl
: 'https://picsum.photos/seed/newbook/400/600',
pages: isEditing ? existing.pages : 0,
language: isEditing ? existing.language : 'Russian',
publishedYear: isEditing ? existing.publishedYear : DateTime.now().year,
rating: isEditing ? existing.rating : 5.0,
status: isEditing ? existing.status : 'want_to_read',
progress: isEditing ? existing.progress : null,
isFavorite: isEditing ? existing.isFavorite : false,
);
if (isEditing) {
onUpdateBook(book);
} else {
onAddBook(book);
}
emit(state.copyWith(isSaved: true));
}
}
@@ -0,0 +1,37 @@
import '../../models/models.dart';
sealed class AddBookEvent {}
class InitializeForm extends AddBookEvent {
final Book? editBook;
final Book? prefilledData;
InitializeForm({this.editBook, this.prefilledData});
}
class UpdateTitle extends AddBookEvent {
final String title;
UpdateTitle(this.title);
}
class UpdateAuthor extends AddBookEvent {
final String author;
UpdateAuthor(this.author);
}
class UpdateAnnotation extends AddBookEvent {
final String annotation;
UpdateAnnotation(this.annotation);
}
class UpdateGenre extends AddBookEvent {
final String genre;
UpdateGenre(this.genre);
}
class ApplyScannedBook extends AddBookEvent {
final Book scannedBook;
ApplyScannedBook(this.scannedBook);
}
class SaveBook extends AddBookEvent {}
@@ -0,0 +1,39 @@
import '../../models/models.dart';
class AddBookState {
final String title;
final String author;
final String annotation;
final String genre;
final Book? editBook;
final bool isSaved;
const AddBookState({
this.title = '',
this.author = '',
this.annotation = '',
this.genre = 'fiction',
this.editBook,
this.isSaved = false,
});
bool get isEditing => editBook != null;
AddBookState copyWith({
String? title,
String? author,
String? annotation,
String? genre,
Book? editBook,
bool? isSaved,
}) {
return AddBookState(
title: title ?? this.title,
author: author ?? this.author,
annotation: annotation ?? this.annotation,
genre: genre ?? this.genre,
editBook: editBook ?? this.editBook,
isSaved: isSaved ?? this.isSaved,
);
}
}
@@ -0,0 +1,47 @@
import 'package:flutter_bloc/flutter_bloc.dart';
import '../../constants/constants.dart';
import 'book_event.dart';
import 'book_state.dart';
class BookBloc extends Bloc<BookEvent, BookState> {
BookBloc() : super(const BookState(books: initialBooks)) {
on<AddBook>((event, emit) {
emit(BookState(books: [...state.books, event.book]));
});
on<UpdateBook>((event, emit) {
final updated = state.books.map((b) {
return b.id == event.book.id ? event.book : b;
}).toList();
emit(BookState(books: updated));
});
on<DeleteBook>((event, emit) {
emit(
BookState(books: state.books.where((b) => b.id != event.id).toList()),
);
});
on<ToggleFavorite>((event, emit) {
final updated = state.books.map((b) {
if (b.id != event.id) return b;
return (
id: b.id,
title: b.title,
author: b.author,
genre: b.genre,
annotation: b.annotation,
coverUrl: b.coverUrl,
pages: b.pages,
language: b.language,
publishedYear: b.publishedYear,
rating: b.rating,
status: b.status,
progress: b.progress,
isFavorite: !b.isFavorite,
);
}).toList();
emit(BookState(books: updated));
});
}
}
@@ -0,0 +1,23 @@
import '../../models/models.dart';
sealed class BookEvent {}
class AddBook extends BookEvent {
final Book book;
AddBook(this.book);
}
class UpdateBook extends BookEvent {
final Book book;
UpdateBook(this.book);
}
class DeleteBook extends BookEvent {
final String id;
DeleteBook(this.id);
}
class ToggleFavorite extends BookEvent {
final String id;
ToggleFavorite(this.id);
}
@@ -0,0 +1,6 @@
import '../../models/models.dart';
class BookState {
final List<Book> books;
const BookState({required this.books});
}
@@ -0,0 +1,21 @@
import 'package:flutter_bloc/flutter_bloc.dart';
import 'library_event.dart';
import 'library_state.dart';
class LibraryBloc extends Bloc<LibraryEvent, LibraryState> {
LibraryBloc() : super(const LibraryState()) {
on<UpdateSearchQuery>(_onUpdateSearchQuery);
on<ChangeTab>(_onChangeTab);
}
void _onUpdateSearchQuery(
UpdateSearchQuery event,
Emitter<LibraryState> emit,
) {
emit(state.copyWith(searchQuery: event.query));
}
void _onChangeTab(ChangeTab event, Emitter<LibraryState> emit) {
emit(state.copyWith(tabIndex: event.tabIndex));
}
}
@@ -0,0 +1,11 @@
sealed class LibraryEvent {}
class UpdateSearchQuery extends LibraryEvent {
final String query;
UpdateSearchQuery(this.query);
}
class ChangeTab extends LibraryEvent {
final int tabIndex;
ChangeTab(this.tabIndex);
}
@@ -0,0 +1,13 @@
class LibraryState {
final String searchQuery;
final int tabIndex;
const LibraryState({this.searchQuery = '', this.tabIndex = 0});
LibraryState copyWith({String? searchQuery, int? tabIndex}) {
return LibraryState(
searchQuery: searchQuery ?? this.searchQuery,
tabIndex: tabIndex ?? this.tabIndex,
);
}
}
@@ -0,0 +1,126 @@
import 'dart:io';
import 'package:flutter_bloc/flutter_bloc.dart';
import '../../models/models.dart';
import '../../services/camera_service.dart';
import '../../services/openai_service.dart';
import 'scanner_event.dart';
import 'scanner_state.dart';
class ScannerBloc extends Bloc<ScannerEvent, ScannerState> {
final CameraService cameraService;
ScannerBloc({required this.cameraService}) : super(const ScannerState()) {
on<InitializeCamera>(_onInitializeCamera);
on<CaptureAndAnalyze>(_onCaptureAndAnalyze);
on<SwitchCamera>(_onSwitchCamera);
on<DismissError>(_onDismissError);
}
Future<void> _onInitializeCamera(
InitializeCamera event,
Emitter<ScannerState> emit,
) async {
try {
final initialized = await cameraService.initializeCamera();
emit(
state.copyWith(
isInitialized: initialized,
hasPermissionError: !initialized,
errorMessage: initialized ? null : 'Нет доступа к камере',
),
);
} catch (e) {
emit(
state.copyWith(
hasPermissionError: true,
errorMessage: 'Ошибка инициализации камеры: $e',
),
);
}
}
Future<void> _onCaptureAndAnalyze(
CaptureAndAnalyze event,
Emitter<ScannerState> emit,
) async {
if (cameraService.controller == null) return;
emit(state.copyWith(isCapturing: true));
try {
// Capture image
final imagePath = await cameraService.captureImage();
if (imagePath == null) {
throw Exception('Не удалось сделать снимок');
}
emit(state.copyWith(isAnalyzing: true, isCapturing: false));
Book? book;
// Try OpenAI first if available
if (event.openaiApiKey != null && event.openaiApiKey!.isNotEmpty) {
print('Using OpenAI service for analysis');
final openaiService = OpenAIService(
apiKey: event.openaiApiKey!,
baseUrl: event.openaiBaseUrl,
);
book = await openaiService.analyzeBookCover(imagePath);
}
    // Gemini fallback (currently disabled); uncomment to re-enable:
// if (book == null) {
// if (event.geminiApiKey == null || event.geminiApiKey!.isEmpty) {
// throw Exception('API ключ не настроен (ни OpenAI, ни Gemini)');
// }
// print('Using Gemini service for analysis');
// final geminiService = GeminiService(apiKey: event.geminiApiKey!);
// book = await geminiService.analyzeBookCover(imagePath);
// }
if (book == null) {
throw Exception('Не удалось распознать книгу');
}
// Clean up temporary image
try {
await File(imagePath).delete();
} catch (e) {
print('Error deleting temporary file: $e');
}
emit(
state.copyWith(
analyzedBook: book,
isAnalyzing: false,
isCapturing: false,
),
);
} catch (e) {
emit(
state.copyWith(
errorMessage: e.toString(),
isCapturing: false,
isAnalyzing: false,
),
);
}
}
Future<void> _onSwitchCamera(
SwitchCamera event,
Emitter<ScannerState> emit,
) async {
await cameraService.switchCamera();
}
void _onDismissError(DismissError event, Emitter<ScannerState> emit) {
emit(state.copyWith(clearError: true));
}
@override
Future<void> close() {
cameraService.dispose();
return super.close();
}
}
@@ -0,0 +1,21 @@
import 'package:books_flutter/config/api_config.dart';
sealed class ScannerEvent {}
class InitializeCamera extends ScannerEvent {}
class CaptureAndAnalyze extends ScannerEvent {
final String? openaiApiKey;
final String openaiBaseUrl;
final String? geminiApiKey;
CaptureAndAnalyze({
this.openaiApiKey,
this.openaiBaseUrl = ApiConfig.openaiBaseUrl,
this.geminiApiKey,
});
}
class SwitchCamera extends ScannerEvent {}
class DismissError extends ScannerEvent {}
@@ -0,0 +1,39 @@
import '../../models/models.dart';
class ScannerState {
final bool isInitialized;
final bool isCapturing;
final bool isAnalyzing;
final bool hasPermissionError;
final String? errorMessage;
final Book? analyzedBook;
const ScannerState({
this.isInitialized = false,
this.isCapturing = false,
this.isAnalyzing = false,
this.hasPermissionError = false,
this.errorMessage,
this.analyzedBook,
});
ScannerState copyWith({
bool? isInitialized,
bool? isCapturing,
bool? isAnalyzing,
bool? hasPermissionError,
String? errorMessage,
Book? analyzedBook,
bool clearError = false,
bool clearBook = false,
}) {
return ScannerState(
isInitialized: isInitialized ?? this.isInitialized,
isCapturing: isCapturing ?? this.isCapturing,
isAnalyzing: isAnalyzing ?? this.isAnalyzing,
hasPermissionError: hasPermissionError ?? this.hasPermissionError,
errorMessage: clearError ? null : (errorMessage ?? this.errorMessage),
analyzedBook: clearBook ? null : (analyzedBook ?? this.analyzedBook),
);
}
}
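The `clearError`/`clearBook` flags exist because a null-coalescing `copyWith` cannot distinguish "leave unchanged" from "set to null". A minimal sketch of that pitfall (hypothetical `Example` class, not from the app):

```dart
class Example {
  final String? error;
  const Example({this.error});

  // Naive copyWith: passing error: null keeps the old value instead of clearing it.
  Example copyWith({String? error}) => Example(error: error ?? this.error);
}

void main() {
  const e = Example(error: 'boom');
  final cleared = e.copyWith(error: null); // intent: clear the error
  print(cleared.error); // still 'boom' — hence ScannerState's explicit clearError flag
}
```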
@@ -0,0 +1,19 @@
/// API Configuration
///
/// Replace YOUR_GEMINI_API_KEY_HERE with your actual Google Gemini API key.
/// Get your API key from: https://makersuite.google.com/app/apikey
///
/// Set openaiApiKey to your actual OpenAI (or OpenAI-compatible) API key.
/// openaiBaseUrl is the server base URL without a path; OpenAIService
/// appends /v1/chat/completions to it.
class ApiConfig {
  // TODO: Replace with your actual Gemini API key
  static const String geminiApiKey = 'YOUR_GEMINI_API_KEY_HERE';

  // TODO: Replace with your actual OpenAI API key
  static const String openaiApiKey = 'sk-openai-api-key';

  // Base URL of the OpenAI-compatible server (no /v1/chat/completions suffix)
  static const String openaiBaseUrl = 'http://192.168.102.158:8317';
  static const String openaiModel = 'gemini-3-pro-image';
}
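Committing real keys in source is risky; one common alternative (a sketch, assuming you launch with `flutter run --dart-define=OPENAI_API_KEY=...`) is to read the key at compile time:

```dart
// Compile-time environment lookup. Falls back to an empty string when the
// define is absent, which the scanner's isNotEmpty checks treat as
// "not configured".
class EnvApiConfig {
  static const String openaiApiKey = String.fromEnvironment(
    'OPENAI_API_KEY',
    defaultValue: '',
  );
}
```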
@@ -0,0 +1,15 @@
typedef Book = ({
String id,
String title,
String author,
String genre,
String annotation,
String? coverUrl,
int? pages,
String? language,
int? publishedYear,
double? rating,
String status,
double? progress,
bool isFavorite,
});
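`Book` is a Dart record typedef: instances are immutable and compare by value, but records have no `copyWith`, which is why `BookBloc`'s `ToggleFavorite` handler rebuilds the whole record field by field. A trimmed illustration (three fields only, not the full typedef above):

```dart
typedef MiniBook = ({String id, String title, bool isFavorite});

// Records have no copyWith: "updating" a field means building a new record.
MiniBook toggleFavorite(MiniBook b) =>
    (id: b.id, title: b.title, isFavorite: !b.isFavorite);

void main() {
  const dune = (id: '1', title: 'Dune', isFavorite: false);
  final toggled = toggleFavorite(dune);
  // Records compare by structural value, not identity.
  print(toggled == (id: '1', title: 'Dune', isFavorite: true)); // true
}
```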
@@ -0,0 +1,10 @@
import 'package:flutter/material.dart';
typedef Category = ({
String id,
String name,
int count,
IconData icon,
Color iconColor,
Color backgroundColor,
});
@@ -0,0 +1,90 @@
import 'package:camera/camera.dart';
import 'package:permission_handler/permission_handler.dart';
class CameraService {
CameraController? _controller;
List<CameraDescription>? _cameras;
bool _isInitialized = false;
bool get isInitialized => _isInitialized;
CameraController? get controller => _controller;
Future<bool> requestPermissions() async {
final cameraStatus = await Permission.camera.request();
return cameraStatus.isGranted;
}
Future<bool> initializeCamera() async {
try {
// Request camera permissions
final hasPermission = await requestPermissions();
if (!hasPermission) {
return false;
}
// Get available cameras
_cameras = await availableCameras();
if (_cameras == null || _cameras!.isEmpty) {
return false;
}
// Initialize the back camera (first camera is usually the back one)
_controller = CameraController(
_cameras!.first,
ResolutionPreset.high,
enableAudio: false,
);
await _controller!.initialize();
_isInitialized = true;
return true;
} catch (e) {
print('Error initializing camera: $e');
return false;
}
}
Future<String?> captureImage() async {
if (_controller == null || !_isInitialized) {
print('Camera not initialized');
return null;
}
try {
final image = await _controller!.takePicture();
return image.path;
} catch (e) {
print('Error capturing image: $e');
return null;
}
}
Future<void> dispose() async {
await _controller?.dispose();
_controller = null;
_isInitialized = false;
}
Future<void> switchCamera() async {
if (_cameras == null || _cameras!.length < 2) {
return;
}
try {
final currentCameraIndex = _cameras!.indexOf(_controller!.description);
final nextCameraIndex = (currentCameraIndex + 1) % _cameras!.length;
await _controller?.dispose();
_controller = CameraController(
_cameras![nextCameraIndex],
ResolutionPreset.high,
enableAudio: false,
);
await _controller!.initialize();
} catch (e) {
print('Error switching camera: $e');
}
}
}
@@ -0,0 +1,152 @@
import 'dart:io';
import 'dart:convert';
import 'package:books_flutter/config/api_config.dart';
import 'package:http/http.dart' as http;
import '../models/models.dart';
class OpenAIService {
final String apiKey;
final String baseUrl;
final String model;
late final String _endpoint;
OpenAIService({
required this.apiKey,
    this.baseUrl = ApiConfig.openaiBaseUrl, // was ApiConfig.openaiApiKey: wrong constant
this.model = ApiConfig.openaiModel,
}) {
_endpoint = '$baseUrl/v1/chat/completions';
}
Future<Book?> analyzeBookCover(String imagePath) async {
try {
// Read the image file
final imageFile = File(imagePath);
final imageBytes = await imageFile.readAsBytes();
final base64Image = base64Encode(imageBytes);
// Create the prompt for book analysis
const prompt = '''
Analyze this book cover image and extract the following information in JSON format:
{
"title": "book title (required)",
"author": "author name (required)",
"genre": "fiction/fantasy/science/detective/biography/other",
"annotation": "brief description or summary if visible, otherwise generate a generic one"
}
Rules:
- Extract exact text from the cover
- If genre is unclear, choose the most appropriate one
- If annotation is not visible, create a brief generic description
- Return ONLY valid JSON, no additional text
- Ensure all required fields are present
- Return the result in Russian
''';
// Create the request body for OpenAI API
final requestBody = {
'model': model, // Use the configured model
'messages': [
{
'role': 'user',
'content': [
{'type': 'text', 'text': prompt},
{
'type': 'image_url',
'image_url': {'url': 'data:image/jpeg;base64,$base64Image'},
},
],
},
],
};
// Make the API request
final response = await http.post(
Uri.parse(_endpoint),
headers: {
'Content-Type': 'application/json',
'Authorization': 'Bearer $apiKey',
},
body: json.encode(requestBody),
);
if (response.statusCode != 200) {
print('OpenAI API error: ${response.statusCode}');
print('Response body: ${response.body}');
return null;
}
final responseData = json.decode(response.body);
// Extract the message content
final responseText = responseData['choices']?[0]?['message']?['content']
?.toString()
.trim();
if (responseText == null || responseText.isEmpty) {
print('Empty response from OpenAI');
return null;
}
// Extract JSON from response (handle potential markdown formatting)
String jsonString = responseText;
if (jsonString.contains('```json')) {
jsonString = jsonString.split('```json')[1].split('```')[0].trim();
} else if (jsonString.contains('```')) {
jsonString = jsonString.split('```')[1].split('```')[0].trim();
}
// Parse JSON response
final Map<String, dynamic> jsonData = json.decode(jsonString);
// Create Book object with extracted data
final Book book = (
id: DateTime.now().millisecondsSinceEpoch.toString(),
title: jsonData['title']?.toString() ?? 'Неизвестная книга',
author: jsonData['author']?.toString() ?? 'Неизвестный автор',
genre: _normalizeGenre(jsonData['genre']?.toString()),
annotation: jsonData['annotation']?.toString() ?? 'Нет описания',
coverUrl: null, // Will be set by the caller
pages: null,
language: 'Russian',
publishedYear: DateTime.now().year,
rating: 5.0,
status: 'want_to_read',
progress: null,
isFavorite: false,
);
return book;
} catch (e) {
print('Error analyzing book cover with OpenAI: $e');
return null;
}
}
String _normalizeGenre(String? genre) {
if (genre == null || genre.isEmpty) return 'other';
final normalized = genre.toLowerCase().trim();
// Map various genre names to our standard genres
final genreMap = {
'фантастика': 'fiction',
'fantasy': 'fantasy',
'фэнтези': 'fantasy',
'science': 'science',
'научпоп': 'science',
'научная': 'science',
'biography': 'biography',
'биография': 'biography',
'detective': 'detective',
'детектив': 'detective',
'роман': 'other',
'novel': 'other',
'poetry': 'other',
'поэзия': 'other',
};
    // Keep the raw value only if it is already one of our standard genres.
    const standard = {'fiction', 'fantasy', 'science', 'detective', 'biography', 'other'};
    return genreMap[normalized] ?? (standard.contains(normalized) ? normalized : 'other');
}
}
@@ -0,0 +1,99 @@
import 'package:flutter/material.dart';
import 'bottom_nav.dart';
import '../screens/library_screen.dart';
import '../screens/categories_screen.dart';
/// Shell widget with bottom navigation and nested navigators for each tab.
/// Uses IndexedStack to preserve navigation state when switching tabs.
class BottomNavShell extends StatefulWidget {
const BottomNavShell({super.key});
@override
State<BottomNavShell> createState() => _BottomNavShellState();
}
class _BottomNavShellState extends State<BottomNavShell> {
int _currentIndex = 0;
// Each tab gets its own navigator key to maintain independent navigation stacks
final _navigatorKeys = List.generate(4, (_) => GlobalKey<NavigatorState>());
@override
Widget build(BuildContext context) {
return PopScope(
canPop: false,
onPopInvokedWithResult: (didPop, result) async {
if (didPop) return;
final shouldPop = await _onWillPop();
if (shouldPop && context.mounted) {
Navigator.of(context).pop();
}
},
child: Scaffold(
body: IndexedStack(
index: _currentIndex,
children: [
_buildNavigator(0, (_) => const LibraryScreen()),
_buildNavigator(1, (_) => const CategoriesScreen()),
_buildNavigator(2, (_) => _buildPlaceholder('Избранное')),
_buildNavigator(3, (_) => _buildPlaceholder('Настройки')),
],
),
bottomNavigationBar: BottomNav(
currentIndex: _currentIndex,
onTap: _onTabTapped,
),
),
);
}
/// Builds a nested navigator for a tab
Widget _buildNavigator(int index, WidgetBuilder builder) {
return Navigator(
key: _navigatorKeys[index],
onGenerateRoute: (settings) {
return MaterialPageRoute(builder: builder, settings: settings);
},
);
}
/// Placeholder screen for tabs not yet implemented
Widget _buildPlaceholder(String title) {
return Scaffold(
appBar: AppBar(title: Text(title), automaticallyImplyLeading: false),
body: Center(
child: Text(title, style: Theme.of(context).textTheme.headlineMedium),
),
);
}
/// Handle tab selection
void _onTabTapped(int index) {
if (_currentIndex == index) {
// If tapping the current tab, pop to root of that tab's navigator
final navigator = _navigatorKeys[index].currentState;
if (navigator != null && navigator.canPop()) {
navigator.popUntil((route) => route.isFirst);
}
} else {
// Switch to the selected tab
setState(() {
_currentIndex = index;
});
}
}
/// Handle system back button
Future<bool> _onWillPop() async {
final navigator = _navigatorKeys[_currentIndex].currentState;
// If the current tab's navigator can pop, pop it
if (navigator != null && navigator.canPop()) {
navigator.pop();
return false; // Don't exit app
}
// If on root of current tab, allow app to exit
return true;
}
}
Binary file not shown (image, 109 KiB).
@@ -0,0 +1,58 @@
import 'dart:io';
import 'package:flutter_test/flutter_test.dart';
import 'package:books_flutter/services/openai_service.dart';
void main() {
test('OpenAI Service - Analyze book cover', () async {
    // Configure the OpenAI service.
    // Note: replace the placeholder with your actual API key before running,
    // otherwise the guard below skips the test.
    const apiKey = 'YOUR_OPENAI_API_KEY_HERE';
    const baseUrl = 'http://localhost:8317';
    // Any vision-capable model exposed by the endpoint will work.
    const model = 'gemini-3-pro-image';
    if (apiKey == 'YOUR_OPENAI_API_KEY_HERE') {
      print('Please set your OpenAI API key in the test file');
      return;
    }
final service = OpenAIService(
apiKey: apiKey,
baseUrl: baseUrl,
model: model,
);
// Path to the sample image
const imagePath = 'samples/photo_2026-02-07_15-05-17.jpg';
// Check if the image file exists
final imageFile = File(imagePath);
if (!imageFile.existsSync()) {
print('Image file not found at: $imagePath');
return;
}
print('Analyzing book cover...');
print('Image path: $imagePath');
print('Image size: ${imageFile.lengthSync()} bytes');
// Analyze the book cover
final book = await service.analyzeBookCover(imagePath);
if (book != null) {
print('\n✅ Successfully analyzed book cover!\n');
print('Title: ${book.title}');
print('Author: ${book.author}');
print('Genre: ${book.genre}');
print('Annotation: ${book.annotation}');
print('\n');
expect(book.title, isNotEmpty);
expect(book.author, isNotEmpty);
} else {
print('\n❌ Failed to analyze book cover');
print(
'Check your API key and ensure the OpenAI server is running at $baseUrl',
);
}
});
}
@@ -0,0 +1,64 @@
import 'dart:io';
import 'lib/services/openai_service.dart';
void main() async {
// Configure the OpenAI service
// Note: Replace with your actual API key
const apiKey = 'YOUR_OPENAI_API_KEY_HERE';
const baseUrl = 'http://localhost:8317';
if (apiKey == 'YOUR_OPENAI_API_KEY_HERE') {
print('❌ Please set your OpenAI API key in this file');
return;
}
final service = OpenAIService(apiKey: apiKey, baseUrl: baseUrl);
// Path to the sample image
const imagePath = 'samples/photo_2026-02-07_15-05-17.jpg';
// Check if the image file exists
final imageFile = File(imagePath);
if (!imageFile.existsSync()) {
print('❌ Image file not found at: $imagePath');
print('Current working directory: ${Directory.current.path}');
return;
}
print('========================================');
print('📖 Testing OpenAI Book Cover Analysis');
print('========================================\n');
print('Image path: $imagePath');
print('Image size: ${imageFile.lengthSync()} bytes');
print('API endpoint: $baseUrl/v1/chat/completions\n');
print('Analyzing book cover... (this may take a few seconds)\n');
// Analyze the book cover
final book = await service.analyzeBookCover(imagePath);
if (book != null) {
print('========================================');
print('✅ Successfully analyzed book cover!');
print('========================================\n');
print('📚 Book Details:');
print(' Title: ${book.title}');
print(' Author: ${book.author}');
print(' Genre: ${book.genre}');
print(' Annotation: ${book.annotation}');
print(' Language: ${book.language}');
print(' Published Year: ${book.publishedYear}');
print(' Rating: ${book.rating}');
print('\n');
print('========================================');
} else {
print('========================================');
print('❌ Failed to analyze book cover');
print('========================================\n');
print('Troubleshooting tips:');
print('1. Check your API key is correct');
print('2. Ensure the OpenAI server is running at $baseUrl');
print('3. Check the server logs for errors');
    print('4. Verify the server exposes a vision-capable model (e.g. gpt-4o)');
print('5. Check network connectivity\n');
}
}