NN Basic

Env

```shell
# install IDE: Anaconda

# venv
conda env list
conda create --name <venv_name> python==<version>
conda activate <venv_name>
conda list
conda deactivate

# install NVIDIA GeForce driver
nvidia-smi

# install by pip or conda; pay attention to version alignment
# torch: from https://pytorch.org/ based on CUDA_VERSION
# sci-computing libs, such as torchsummary, pandas, matplotlib, scikit-learn...
# transformers datasets accelerate

# backup
conda create --name <venv_name>_bak --clone <venv_name>
conda env remove --name <venv_name>_bak

# proxy
vim ~/.bashrc
PROXY_SERVER="http://your_proxy_server_ip:port"
export http_proxy="$PROXY_SERVER"
export HTTP_PROXY="$http_proxy"
export https_proxy="$PROXY_SERVER"
export HTTPS_PROXY="$https_proxy"
# export all_proxy="socks5://your_proxy_server_ip:port"
# export ALL_PROXY="$all_proxy"
# export no_proxy="localhost,127.0.0.1,::1,.local"
# export NO_PROXY="$no_proxy"
export HF_ENDPOINT=https://hf-mirror.com
export HF_XET_CACHE=https://hf-mirror.com/xet
source ~/.bashrc  # verify

# export HUGGINGFACE_TOKEN=<token>
hf auth login
hf auth whoami
```

Tensor A multi-dimensional array structure specialized for GPU computation in neural networks ...
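The torch/CUDA alignment mentioned above can be checked from Python after installation. A minimal sketch (`cuda_ready` is a hypothetical helper, not part of torch); it degrades gracefully when torch is not installed:

```python
import importlib.util

def cuda_ready() -> bool:
    """Return True if torch is installed and can see a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed in this env
    import torch
    return torch.cuda.is_available()

# A False result usually means a missing/mismatched driver or a CPU-only torch build.
print("CUDA ready:", cuda_ready())
```

Compare `nvidia-smi`'s reported CUDA version against the build you picked on https://pytorch.org/ when this returns False.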

July 15, 2025 · 20 min · biglonglong

LLM Basic

GPUs LLM hardware configuration self-check reference - AI全书 Parameter scale: the number of model parameters, in billions (B); this unit is roughly comparable to GB of memory. Lightweight (1-7B): fine on a personal computer. Mid-weight (14-32B): needs a high-performance GPU. Heavyweight (70B+): needs a professional server. Data bit width: the precision of model parameters, trading off training speed against GPU resources ...
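The "parameter count in B is roughly comparable to GB" rule of thumb above can be made explicit: memory for weights is parameters x bytes per parameter, so the rule holds exactly at 1 byte/param (INT8/FP8) and doubles at FP16. A minimal sketch (`vram_gb` is a hypothetical helper; it ignores activation/KV-cache overhead):

```python
def vram_gb(params_b: float, bytes_per_param: float = 2.0) -> float:
    """Rough weight-memory estimate in GB.

    params_b: parameter count in billions (B).
    bytes_per_param: 2.0 for FP16/BF16, 1.0 for INT8, 4.0 for FP32.
    1e9 params x 1 byte ~ 1 GB, hence the B ~ GB rule of thumb at 8-bit.
    """
    return params_b * bytes_per_param

print(vram_gb(7))      # 7B at FP16 -> ~14 GB of weights
print(vram_gb(7, 1))   # 7B at INT8 -> ~7 GB (B ~ GB)
```

Real deployments need headroom beyond this for activations and the KV cache.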

August 14, 2025 · 24 min · biglonglong

RL Fundamental

Basic Concepts Compared with supervised learning, reinforcement learning self-converges on rewards from the environment, needs no large labeled dataset, and learns the optimal policy by trial and error: State ($s$): a description of the environment at a particular moment, representing the situation the agent is in ...
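The trial-and-error loop described above can be sketched with a multi-armed bandit (state is trivial here, so only the reward signal drives learning; `run_bandit` and its parameters are illustrative assumptions, not from the post):

```python
import random

def run_bandit(true_means, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy trial and error: no labels, only environment rewards;
    incremental value estimates self-converge toward the best arm."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    values = [0.0] * n          # estimated value of each action
    for _ in range(steps):
        if rng.random() < eps:  # explore: try a random action
            a = rng.randrange(n)
        else:                   # exploit: pick the best-looking action
            a = max(range(n), key=lambda i: values[i])
        reward = rng.gauss(true_means[a], 1.0)          # noisy reward from the environment
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]   # incremental mean update
    return max(range(n), key=lambda i: values[i])       # learned best action

print(run_bandit([0.0, 5.0]))   # the higher-reward arm wins
```

No supervision tells the agent which arm is correct; the reward signal alone is enough for the estimates to converge.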

November 18, 2025 · 20 min · biglonglong

Agent Development

LangChain is built on top of the LangGraph runtime; an agent system combines model + prompts + tools + middleware + memory, with LangSmith for observability. See the LangChain 1.0 Agents docs and the LangChain API reference.

Multi-model switching

```python
import os
from dotenv import load_dotenv

load_dotenv()

def get_default_model():
    if os.getenv("OPENAI_API_KEY"):
        return "gpt-4-turbo"
    elif os.getenv("GOOGLE_API_KEY"):
        return "google:gemini-1.5-flash"
    elif os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic:claude-sonnet-4-5"
    elif os.getenv("GROQ_API_KEY"):
        return "groq:llama-3.3-70b-versatile"
    else:
        raise ValueError("No API key found!")
```

Basic usage

```python
import os
from dotenv import load_dotenv
from langchain.chat_models import init_chat_model
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

load_dotenv()

model = init_chat_model(
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    model="deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
    api_key=os.getenv("OPENAI_API_KEY"),
    temperature=0.7,
    max_tokens=100,
)

# messages = [
#     {"role": "system", "content": "you are a helpful assistant."},
#     {"role": "user", "content": "Hello! How are you?"},
# ]
messages = [
    SystemMessage(content="you are a helpful assistant."),
    HumanMessage(content="Hello! How are you?"),
]
# print("messages:", messages)
print(messages[-1].type, ": ", messages[-1].content)

# sync
try:
    response = model.invoke(messages, config=None)  # input can be a plain string as well
    messages.append(response)
    print(messages[-1].type, ": ", messages[-1].content)
except ValueError as e:
    print("configuration error:", e)
except ConnectionError as e:
    print("network error:", e)
except Exception as e:
    print("unknown error:", type(e).__name__, ":", e)

# streaming
# for chunk in model.stream(messages):
#     print(chunk.content, end="", flush=True)
```

A plain invoke returns finish_reason, model_name, token_usage ...
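The invoke-then-append conversation pattern in the excerpt can be exercised offline with a stand-in model, which is useful for testing prompt plumbing without API keys. A minimal sketch (`Message` and `EchoModel` are hypothetical stand-ins, not LangChain classes):

```python
class Message:
    """Tiny stand-in for a LangChain message: a role type plus content."""
    def __init__(self, type_, content):
        self.type = type_
        self.content = content

class EchoModel:
    """Stand-in chat model: 'responds' by echoing the last human turn."""
    def invoke(self, messages):
        return Message("ai", f"echo: {messages[-1].content}")

messages = [
    Message("system", "you are a helpful assistant."),
    Message("human", "Hello! How are you?"),
]
model = EchoModel()
response = model.invoke(messages)
messages.append(response)   # keep the transcript growing, as in the post
print(messages[-1].type, ":", messages[-1].content)
```

Swapping `EchoModel` for the real `init_chat_model(...)` result leaves the surrounding loop unchanged, which is the point of the shared invoke interface.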

January 5, 2025 · 21 min · biglonglong