Using LaTeX to Organize a Researcher's Publications
In LaTeX, the tikz-qtree package is used to draw tree diagrams. It is based on the Qtree syntax, a simple yet flexible notation for describing trees.
tikz-qtree offers many features that make it a powerful tool for drawing trees, including:
Automatic placement of tree nodes to avoid collisions.
Support for several kinds of trees, including binary trees, forests, and hierarchies.
Rich styling options for customizing a tree's appearance.
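As a quick taste of the Qtree bracket syntax before building the full figure, here is a minimal sketch (the node labels are placeholders):

```latex
\documentclass[margin=3pt,tikz]{standalone}
\usepackage{tikz-qtree}
\begin{document}
\begin{tikzpicture}
% Each [.{label} ... ] pair creates a parent node and its subtree;
% bare braced tokens inside become leaf nodes.
\Tree [.{Root}
        [.{Child A} {Leaf 1} {Leaf 2} ]
        [.{Child B} {Leaf 3} ] ]
\end{tikzpicture}
\end{document}
```

By default the tree grows downward; the style settings introduced later turn it sideways.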
Some readers want to use it to organize their literature collection, with the effect shown in the figure below:
To reproduce the figure above, we use the tikz-qtree package and proceed in the following steps:
Step 1: As usual for a standalone figure, we start from:
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{trees,decorations.text,math,calc, positioning, arrows.meta}
\begin{document}
\begin{tikzpicture}
\end{tikzpicture}
\end{document}
A figure produced this way is already tightly cropped and needs no trimming.
Step 2: Load the required packages and add some color definitions and font settings:
\documentclass[margin=3pt,tikz]{standalone}
\usepackage{tikz-qtree}
\usetikzlibrary{trees,decorations.text,math,calc, positioning, arrows.meta}
\usepackage{ctex}
\usepackage{xcolor}
\definecolor{Red}{RGB}{190,20,42}
\definecolor{CY}{RGB}{233,246,254}
\usepackage{fontspec}
\setmainfont{Times New Roman}
\begin{document}
Step 3: Set the node (box) format:
\tikzset{
grow'=right,level distance=25mm, sibling distance =3.5mm,
execute at begin node=\strut,
every tree node/.style={%red,
draw=gray!80!black,
line width=0.6pt,
text width=2cm,
rounded corners=2pt,
anchor = west,
fill=white,
minimum width=2mm,
inner sep=1pt,
align=left,
font = {\scriptsize}},
edge from parent/.style={draw=black,
edge from parent fork right}
}
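These settings can be previewed on a small throwaway tree before tackling the full taxonomy. The sketch below drops the style block into a tikzpicture inside the preamble from Step 2 (the labels are placeholders):

```latex
\begin{tikzpicture}
\tikzset{
grow'=right, level distance=25mm, sibling distance=3.5mm,
execute at begin node=\strut,
every tree node/.style={draw=gray!80!black, line width=0.6pt,
  text width=2cm, rounded corners=2pt, anchor=west, fill=white,
  inner sep=1pt, align=left, font={\scriptsize}},
edge from parent/.style={draw=black, edge from parent fork right}}
% grow'=right lays the tree out sideways; "edge from parent fork right"
% draws the right-angled connectors seen in the target figure
\Tree [.{Topic} [.{Branch A} {Leaf} ] [.{Branch B} {Leaf} ] ]
\end{tikzpicture}
```

Tweaking level distance and sibling distance here is much faster than recompiling the full figure.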
Step 4: Start drawing. The key is the parent-child nesting of the content, for example:
[.{Strengthen LLMs' Programming Skills (3.1)}
[.{LLM as a Strong Coder}
\node[fill=CY,text width=8cm](t1){AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), PolyCoder (Xu et al., 2022), Codex (Chen et al., 2021), CodeGen (Nijkamp et al., 2022)};
]
[.{LLM as a SOTA Code Evaluator}
\node[fill=CY,text width=8cm](t1){AutoFill (Kang et al., 2023a), GPT-3.5Eval (Zhuo, 2023), PentestGPT (Deng et al., 2023a), SkipAnalyzer (Mohajer et al., 2023), LIBRO (Kang et al., 2023b)};
]
[.{Collaborative Coding Solves Complex Tasks}
\node[fill=CY,text width=8cm](t1){MetaGPT (Hong et al., 2023), ChatDev (Qian et al., 2023a), DyLAN (Liu et al., 2023g), AutoGen (Wu et al., 2023b), Self-planning (Jiang et al., 2023)};
]
]
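The nesting rule used above can be summarized as a skeleton: each `[.{...}` opens a parent whose children follow until the matching `]`, and a bare `\node[...]{...};` acts as a leaf whose local options (here `fill=CY` and `text width=8cm`) override the `every tree node` defaults. A minimal sketch with placeholder text:

```latex
\Tree [.{Parent topic}
        [.{Subtopic}
          % leaf: local options override the every tree node style
          \node[fill=CY,text width=8cm]{Reference A, Reference B};
        ]
      ]
```

The `(t1)` name after the options is optional; it only matters if you later want to refer to the node by name, so reusing the same name for every leaf is harmless here.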
Step 5: Repeat the same pattern for the remaining content. The complete code is given below:
\documentclass[margin=3pt,tikz]{standalone}
\usepackage{tikz-qtree}
\usetikzlibrary{trees,decorations.text,math,calc, positioning, arrows.meta}
\usepackage{ctex}
\usepackage{xcolor}
\definecolor{Red}{RGB}{190,20,42}
\definecolor{CY}{RGB}{233,246,254}
\usepackage{fontspec}
\setmainfont{Times New Roman}
\begin{document}
\begin{tikzpicture}
\tikzset{
grow'=right,level distance=25mm, sibling distance =3.5mm,
execute at begin node=\strut,
every tree node/.style={%red,
draw=gray!80!black,
line width=0.6pt,
text width=2cm,
rounded corners=2pt,
anchor = west,
fill=white,
minimum width=2mm,
inner sep=1pt,
align=left,
font = {\scriptsize}},
edge from parent/.style={draw=black,
edge from parent fork right}
}
%%% =======================================================
\begin{scope}[frontier/.style={sibling distance=4em,level distance = 7em}]
\Tree
[.{How Code Empowers LLMs to\\Serve as IAs}
[.{How Code Assists LLMs}
[.{Boost LLMs' Performance (3)}
[.{Strengthen LLMs' Programming Skills (3.1)}
[.{LLM as a Strong Coder}
\node[fill=CY,text width=8cm](t1){AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), PolyCoder (Xu et al., 2022), Codex (Chen et al., 2021), CodeGen (Nijkamp et al., 2022)};
]
[.{LLM as a SOTA Code Evaluator}
\node[fill=CY,text width=8cm](t1){AutoFill (Kang et al., 2023a), GPT-3.5Eval (Zhuo, 2023), PentestGPT (Deng et al., 2023a), SkipAnalyzer (Mohajer et al., 2023), LIBRO (Kang et al., 2023b)};
]
[.{Collaborative Coding Solves Complex Tasks}
\node[fill=CY,text width=8cm](t1){MetaGPT (Hong et al., 2023), ChatDev (Qian et al., 2023a), DyLAN (Liu et al., 2023g), AutoGen (Wu et al., 2023b), Self-planning (Jiang et al., 2023)};
]
]
%% =========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Enhancing Task Decomposition with Chain of Thought}
\node[fill=CY,text width=8cm](t1){Code Training Improves LLM CoT (Fu and Khot, 2022), When to Train LLM on Code (Ma et al., 2023a)};
]
[.{Program-of-Thought}
\node[fill=CY,text width=8cm](t1){LM Decomposers (Ye et al., 2023b), PoT (Chen et al., 2023b), PAL (Gao et al., 2023), LM Theorem Proving (Polu and Sutskever, 2020), LM Math Solving (Drori et al., 2022), Binding LMs (Cheng et al., 2023), SelfzCoT (Lei and Deng, 2023)};
]
]
%% ========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Enhancing Task Decomposition with Chain of Thought}
\node[fill=CY,text width=8cm](t1){Code Training Improves LLM CoT (Fu and Khot, 2022), When to Train LLM on Code (Ma et al., 2023a)};
]
[.{Program-of-Thought}
\node[fill=CY,text width=8cm](t1){LM Decomposers (Ye et al., 2023b), PoT (Chen et al., 2023b), PAL (Gao et al., 2023), LM Theorem Proving (Polu and Sutskever, 2020), LM Math Solving (Drori et al., 2022), Binding LMs (Cheng et al., 2023), SelfzCoT (Lei and Deng, 2023)};
]
]
%% ========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Text-based Tools}
\node[fill=CY,text width=8cm](t1){TALM (Parisi et al., 2022a), Toolformer (Schick et al., 2023), ToolAlpaca (Tang et al., 2023), Gorilla (Patil et al., 2023), RestGPT (Song et al., 2023), ToolkenGPT (Hao et al., 2023)};
]
[.{Multimodality Tools}
\node[fill=CY,text width=8cm](t1){HuggingGPT (Shen et al., 2023), VISPROG (Gupta and Kembhavi, 2023), ViperGPT (Suris et al., 2023), TaskMatrix.AI (Liang et al., 2023d), VPGEN (Cho et al., 2023)};
]
]
]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[.{Boost LLMs' Performance (3)}
[.{Strengthen LLMs' Programming Skills (3.1)}
[.{LLM as a Strong Coder}
\node[fill=CY,text width=8cm](t1){AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), PolyCoder (Xu et al., 2022), Codex (Chen et al., 2021), CodeGen (Nijkamp et al., 2022)};
]
[.{LLM as a SOTA Code Evaluator}
\node[fill=CY,text width=8cm](t1){AutoFill (Kang et al., 2023a), GPT-3.5Eval (Zhuo, 2023), PentestGPT (Deng et al., 2023a), SkipAnalyzer (Mohajer et al., 2023), LIBRO (Kang et al., 2023b)};
]
[.{Collaborative Coding Solves Complex Tasks}
\node[fill=CY,text width=8cm](t1){MetaGPT (Hong et al., 2023), ChatDev (Qian et al., 2023a), DyLAN (Liu et al., 2023g), AutoGen (Wu et al., 2023b), Self-planning (Jiang et al., 2023)};
]
]
%% =========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Enhancing Task Decomposition with Chain of Thought}
\node[fill=CY,text width=8cm](t1){Code Training Improves LLM CoT (Fu and Khot, 2022), When to Train LLM on Code (Ma et al., 2023a)};
]
[.{Program-of-Thought}
\node[fill=CY,text width=8cm](t1){LM Decomposers (Ye et al., 2023b), PoT (Chen et al., 2023b), PAL (Gao et al., 2023), LM Theorem Proving (Polu and Sutskever, 2020), LM Math Solving (Drori et al., 2022), Binding LMs (Cheng et al., 2023), SelfzCoT (Lei and Deng, 2023)};
]
]
%% ========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Enhancing Task Decomposition with Chain of Thought}
\node[fill=CY,text width=8cm](t1){Code Training Improves LLM CoT (Fu and Khot, 2022), When to Train LLM on Code (Ma et al., 2023a)};
]
[.{Program-of-Thought}
\node[fill=CY,text width=8cm](t1){LM Decomposers (Ye et al., 2023b), PoT (Chen et al., 2023b), PAL (Gao et al., 2023), LM Theorem Proving (Polu and Sutskever, 2020), LM Math Solving (Drori et al., 2022), Binding LMs (Cheng et al., 2023), SelfzCoT (Lei and Deng, 2023)};
]
]
%% ========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Text-based Tools}
\node[fill=CY,text width=8cm](t1){TALM (Parisi et al., 2022a), Toolformer (Schick et al., 2023), ToolAlpaca (Tang et al., 2023), Gorilla (Patil et al., 2023), RestGPT (Song et al., 2023), ToolkenGPT (Hao et al., 2023)};
]
[.{Multimodality Tools}
\node[fill=CY,text width=8cm](t1){HuggingGPT (Shen et al., 2023), VISPROG (Gupta and Kembhavi, 2023), ViperGPT (Suris et al., 2023), TaskMatrix.AI (Liang et al., 2023d), VPGEN (Cho et al., 2023)};
]
]
]
]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[.{How Code Assists LLMs}
[.{Boost LLMs' Performance (3)}
[.{Strengthen LLMs' Programming Skills (3.1)}
[.{LLM as a Strong Coder}
\node[fill=CY,text width=8cm](t1){AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), PolyCoder (Xu et al., 2022), Codex (Chen et al., 2021), CodeGen (Nijkamp et al., 2022)};
]
[.{LLM as a SOTA Code Evaluator}
\node[fill=CY,text width=8cm](t1){AutoFill (Kang et al., 2023a), GPT-3.5Eval (Zhuo, 2023), PentestGPT (Deng et al., 2023a), SkipAnalyzer (Mohajer et al., 2023), LIBRO (Kang et al., 2023b)};
]
[.{Collaborative Coding Solves Complex Tasks}
\node[fill=CY,text width=8cm](t1){MetaGPT (Hong et al., 2023), ChatDev (Qian et al., 2023a), DyLAN (Liu et al., 2023g), AutoGen (Wu et al., 2023b), Self-planning (Jiang et al., 2023)};
]
]
%% =========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Enhancing Task Decomposition with Chain of Thought}
\node[fill=CY,text width=8cm](t1){Code Training Improves LLM CoT (Fu and Khot, 2022), When to Train LLM on Code (Ma et al., 2023a)};
]
[.{Program-of-Thought}
\node[fill=CY,text width=8cm](t1){LM Decomposers (Ye et al., 2023b), PoT (Chen et al., 2023b), PAL (Gao et al., 2023), LM Theorem Proving (Polu and Sutskever, 2020), LM Math Solving (Drori et al., 2022), Binding LMs (Cheng et al., 2023), SelfzCoT (Lei and Deng, 2023)};
]
]
%% ========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Enhancing Task Decomposition with Chain of Thought}
\node[fill=CY,text width=8cm](t1){Code Training Improves LLM CoT (Fu and Khot, 2022), When to Train LLM on Code (Ma et al., 2023a)};
]
[.{Program-of-Thought}
\node[fill=CY,text width=8cm](t1){LM Decomposers (Ye et al., 2023b), PoT (Chen et al., 2023b), PAL (Gao et al., 2023), LM Theorem Proving (Polu and Sutskever, 2020), LM Math Solving (Drori et al., 2022), Binding LMs (Cheng et al., 2023), SelfzCoT (Lei and Deng, 2023)};
]
]
%% ========================================================================
[.{Empower LLMs' Complex Reasoning (3.2)}
[.{Text-based Tools}
\node[fill=CY,text width=8cm](t1){TALM (Parisi et al., 2022a), Toolformer (Schick et al., 2023), ToolAlpaca (Tang et al., 2023), Gorilla (Patil et al., 2023), RestGPT (Song et al., 2023), ToolkenGPT (Hao et al., 2023)};
]
\node[fill=CY,text width=10.4cm](t1){HuggingGPT (Shen et al., 2023), VISPROG (Gupta and Kembhavi, 2023), ViperGPT (Suris et al., 2023), TaskMatrix.AI (Liang et al., 2023d), VPGEN (Cho et al., 2023)};
]
]
]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\node[fill=CY,text width=18cm](f9){LM Decomposers (Ye et al., 2023b), PoT (Chen et al., 2023b), PAL (Gao et al., 2023), LM Theorem Proving (Polu and Sutskever, 2020), LM Math Solving (Drori et al., 2022), Binding LMs (Cheng et al., 2023), SelfzCoT (Lei and Deng, 2023)};
]
\end{scope}
\end{tikzpicture}
\end{document}
Source: https://zhuanlan.zhihu.com/p/677351388