OpenArchX

A Powerful Machine Learning Framework

Build, train, and deploy machine learning models with a flexible, high-performance framework designed for researchers and developers.

import openarchx as oax
from openarchx.quantum.circuit import QuantumLayer

# Create a neural network with quantum layer
model = oax.models.Sequential([
  oax.layers.Dense(128, activation='relu', input_shape=(784,)),
  oax.layers.Dropout(0.2),
  
  # Add a quantum layer for enhanced computation
  QuantumLayer(num_qubits=4, num_params=12),
  
  oax.layers.Dense(64, activation='relu'),
  oax.layers.Dropout(0.2),
  oax.layers.Dense(10, activation='softmax')
])

# Compile with GPU acceleration
model.compile(
  optimizer=oax.optimizers.Adam(learning_rate=0.001),
  loss='categorical_crossentropy',
  metrics=['accuracy'],
  use_cuda=True
)

# Train the model (x_train/y_train and x_val/y_val are your prepared datasets)
model.fit(x_train, y_train, 
          epochs=5, 
          batch_size=32,
          validation_data=(x_val, y_val))

# Save the model in the native .oaxm format
oax.utils.save_model(model, 'quantum_mnist_model.oaxm')

Features

Powerful Machine Learning Made Simple

OpenArchX provides a comprehensive set of tools for building and training machine learning models with ease.

High Performance

Optimized for GPU acceleration with CUDA support for fast training and inference.
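
For instance, CUDA use can be requested at compile time, as in the example above (a minimal sketch using only the calls shown on this page; device-placement specifics may vary by version):

import openarchx as oax

model = oax.models.Sequential([
  oax.layers.Dense(32, activation='relu', input_shape=(10,)),
  oax.layers.Dense(1)
])

# use_cuda=True runs training and inference on the GPU
model.compile(optimizer='adam', loss='mse', use_cuda=True)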

Neural Networks

Build and train deep neural networks with a comprehensive set of layers and activation functions.

Python-First

Intuitive Python API designed for researchers and developers with seamless NumPy integration.
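
As a quick illustration of the NumPy interop (a sketch using the Tensor class that appears in the examples below):

import numpy as np
from openarchx.core.tensor import Tensor

# Wrap a NumPy array in an OpenArchX tensor...
x = Tensor(np.random.rand(4, 10))

# ...and get the underlying array back via .data
print(x.data.shape)  # (4, 10)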

Deployment Ready

Tools for model optimization, conversion, and deployment across various platforms.

Quantum Computing

Integrate quantum circuits with classical neural networks for next-generation AI models.

Open Source

Fully open source with an active community of contributors and maintainers.

Getting Started

Start Building with OpenArchX

Get up and running with OpenArchX in minutes.

Installation

pip install openarchx

For GPU support with CUDA:

pip install openarchx[cuda]

For quantum computing features:

pip install openarchx[quantum]
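
To verify the installation (assuming the package exposes a standard __version__ attribute, as most Python packages do):

python -c "import openarchx; print(openarchx.__version__)"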

Quick Example

import openarchx as oax
import numpy as np

# Create some sample data
x = np.random.rand(100, 10)
y = np.random.rand(100, 1)

# Create a simple model
model = oax.models.Sequential([
  oax.layers.Dense(32, activation='relu', input_shape=(10,)),
  oax.layers.Dense(16, activation='relu'),
  oax.layers.Dense(1)
])

# Compile and train
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=10, batch_size=8)

# Make predictions
predictions = model.predict(x)
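
The trained model can then be saved in the native format, as in the example at the top of the page:

oax.utils.save_model(model, 'simple_model.oaxm')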

Examples

See OpenArchX in Action

Explore examples and use cases to understand the power of OpenArchX.

Quantum-Transformer Hybrid

Combine quantum computing with transformer architectures for advanced AI models.

import numpy as np
from openarchx.core.tensor import Tensor
from openarchx.layers.transformer import TransformerEncoderLayer
from openarchx.quantum.circuit import QuantumLayer, QuantumCircuit

class QuantumTransformerLayer(QuantumLayer):
    def __init__(self, num_qubits=4):
        super().__init__(num_qubits, num_params=num_qubits * 3)
        
    def build_circuit(self):
        # Apply Hadamard gates to create superposition
        for i in range(self.circuit.num_qubits):
            self.circuit.h(i)
        
        # Apply parameterized rotations
        param_idx = 0
        for i in range(self.circuit.num_qubits):
            self.circuit.rx(param_idx, i)
            param_idx += 1
            self.circuit.ry(param_idx, i)
            param_idx += 1
            self.circuit.rz(param_idx, i)
            param_idx += 1
        
        # Apply entangling CNOT gates
        for i in range(self.circuit.num_qubits - 1):
            self.circuit.cnot(i, i + 1)
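
The custom layer can then be dropped into a model alongside classical layers, as in the first example on this page (a simplified sketch; the imports above suggest the complete hybrid also combines it with TransformerEncoderLayer):

import openarchx as oax

# Hybrid model mixing classical Dense layers with the quantum layer
hybrid = oax.models.Sequential([
  oax.layers.Dense(16, activation='relu', input_shape=(8,)),
  QuantumTransformerLayer(num_qubits=4),
  oax.layers.Dense(2, activation='softmax')
])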

CNN with CUDA Acceleration

Build efficient CNNs with GPU acceleration for image classification.

# Note: Module, Conv2d, MaxPool2d, Linear, relu, and Tensor are OpenArchX
# building blocks; their imports are omitted in this excerpt.
class SmallCNN(Module):
    def __init__(self):
        super().__init__()
        # Smaller CNN for faster training
        self.conv1 = Conv2d(1, 8, kernel_size=3, padding=1)
        self.conv2 = Conv2d(8, 16, kernel_size=3, padding=1)
        self.pool = MaxPool2d(kernel_size=2)
        self.fc1 = Linear(16 * 7 * 7, 32)  # 28x28 input -> 14x14 -> 7x7 after two poolings
        self.fc2 = Linear(32, 10)
    
    def forward(self, x):
        x = relu(self.conv1(x))
        x = self.pool(x)
        x = relu(self.conv2(x))
        x = self.pool(x)
        # Flatten the pooled feature maps for the fully connected layers
        batch_size = x.data.shape[0]
        x = Tensor(x.data.reshape(batch_size, -1))
        x = relu(self.fc1(x))
        x = self.fc2(x)
        return x

    def cuda(self):
        """Move the model to the GPU."""
        return self.to('cuda')
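
A minimal usage sketch (assuming NCHW image layout, consistent with the 16 * 7 * 7 flatten size for 28x28 inputs):

import numpy as np
from openarchx.core.tensor import Tensor

model = SmallCNN()
# model = model.cuda()  # uncomment to move the model to the GPU

# Forward a dummy batch of 32 grayscale 28x28 images
x = Tensor(np.random.rand(32, 1, 28, 28))
logits = model(x)  # shape (32, 10)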

Model Saving & Framework Conversion

Save models in the native .oaxm format and convert between PyTorch and TensorFlow.

# Save the model (save/load and conversion helpers are provided by OpenArchX;
# trained_model, pt_model, and tf_equivalent are defined elsewhere)
save_path = save_model(trained_model, "saved_models/simple_model.oaxm")
print(f"Model saved to {save_path}")

# Load the model
loaded_model = load_model(save_path, model_class=Sequential)

# Convert PyTorch model to .oaxm
oaxm_path = convert_from_pytorch(pt_model, "saved_models/from_pytorch.oaxm")

# Convert the OpenArchX model to TensorFlow
# (tf_equivalent: a TensorFlow model with a matching architecture, built separately)
tf_loaded = convert_to_tensorflow("saved_models/simple_model.oaxm", tf_equivalent)

Custom Training Loop

Create flexible training loops with full control over the optimization process.

# Note: to_device, calculate_loss, and AverageMeter are helpers defined
# elsewhere in the example; Tensor comes from openarchx.core.tensor.
def train_epoch(model, optimizer, x_train, y_train, batch_size):
    """Train for one epoch."""
    losses = AverageMeter()
    accuracy = AverageMeter()
    num_batches = len(x_train) // batch_size
    
    for i in range(num_batches):
        start_idx = i * batch_size
        end_idx = start_idx + batch_size
        
        # Get batch
        batch_x = x_train[start_idx:end_idx]
        batch_y = y_train[start_idx:end_idx]
        
        # Convert to tensors and move to device
        x = to_device(Tensor(batch_x))
        y = to_device(Tensor(batch_y))
        
        # Forward pass
        pred = model(x)
        
        # Calculate loss
        loss = calculate_loss(pred, y)
        
        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Track the running loss; accuracy updates are omitted in this excerpt
        # (AverageMeter is assumed to expose update() and avg)
        losses.update(float(loss.data))

    return losses.avg
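
A driver loop built on this function might look like the following (a sketch; model and data setup as in the earlier examples):

import openarchx as oax

optimizer = oax.optimizers.Adam(learning_rate=0.001)

for epoch in range(5):
    avg_loss = train_epoch(model, optimizer, x_train, y_train, batch_size=32)
    print(f"epoch {epoch + 1}: loss = {avg_loss:.4f}")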

Ready to Get Started?

Join the community of developers and researchers using OpenArchX.