Compare commits


5 Commits

Author SHA1 Message Date
kelvinBen 479abe2e3b 1. Add download task threads 3 years ago
kelvinBen 756025d7fd Add xlsx files to the ignore list 3 years ago
kelvinBen 4b3db5aff7 - Add batch downloading of files over the network 3 years ago
kelvinBen acc01c3303 - Improve Excel file output 3 years ago
kelvinBen 1f0fa65606 - Speed up iOS shell (packer) detection 3 years ago
  1. .gitignore (5)
  2. README.md (142)
  3. app.py (126)
  4. config.py (95)
  5. libs/core/__init__.py (277)
  6. libs/core/download.py (110)
  7. libs/core/net.py (65)
  8. libs/core/parses.py (99)
  9. libs/task/android_task.py (455)
  10. libs/task/base_task.py (225)
  11. libs/task/download_task.py (80)
  12. libs/task/ios_task.py (133)
  13. libs/task/net_task.py (80)
  14. libs/task/web_task.py (24)
  15. requirements.txt (6)
  16. tools/apktool.jar (BIN)
  17. tools/unpacker/aapt.exe (BIN)
  18. tools/unpacker/adb.exe (BIN)
  19. tools/unpacker/hexl-server-arm32 (BIN)
  20. tools/unpacker/hexl-server-arm64 (BIN)
  21. update.md (17)

.gitignore vendored (5)

@@ -1,5 +1,6 @@
result_*.txt result_*.txt
result_*.xls result_*.xls
result_*.xlsx
download/ download/
history/ history/
out/ out/
@@ -114,3 +115,7 @@ venv.bak/
# add # add
.idea/ .idea/
#
1.py
.vscode/

@@ -1,30 +1,30 @@
![License](https://img.shields.io/badge/Version-V1.0.8-red) ![Language](https://img.shields.io/badge/Language-Python3-blue) ![License](https://img.shields.io/badge/License-GPL3.0-orange) [![HitCount](https://hits.dwyl.com/kelvinBen/kelvinBen/AppInfoScanner.svg?style=flat&show=unique)](http://hits.dwyl.com/kelvinBen/kelvinBen/AppInfoScanner) ![License](https://img.shields.io/badge/Version-V1.0.8-red) ![Language](https://img.shields.io/badge/Language-Python3-blue) ![License](https://img.shields.io/badge/License-GPL3.0-orange)
该项目目前仅仅是规划项目中的冰山一角,如果您对此项目感兴趣或者想参与后继项目的开发工作或者翻译工作中,请发送邮件至blsm@vip.qq.com说明你的能力和诉求。 该项目目前仅仅是规划项目中的冰山一角,如果您对此项目感兴趣或者想参与后继项目的开发工作或者翻译工作中,请发送邮件至blsm@vip.qq.com说明你的能力和诉求。
## AppInfoScanner ### AppInfoScanner
一款适用于以HW行动/红队/渗透测试团队为场景的移动端(Android、iOS、WEB、H5、静态网站)信息收集扫描工具,可以帮助渗透测试工程师、攻击队成员、红队成员快速收集到移动端或者静态WEB站点中关键的资产信息并提供基本的信息输出,如:Title、Domain、CDN、指纹信息、状态信息等。 一款适用于以HW行动/红队/渗透测试团队为场景的移动端(Android、iOS、WEB、H5、静态网站)信息收集扫描工具,可以帮助渗透测试工程师、攻击队成员、红队成员快速收集到移动端或者静态WEB站点中关键的资产信息并提供基本的信息输出,如:Title、Domain、CDN、指纹信息、状态信息等。
## 前言 ### 前言
- 本项目的开发者目前为个人开发者同时有自己的工作,新的功能或者需求会在闲暇时间进行开发,BUG会优先进行处理。 - 本项目的开发者目前为个人开发者同时有自己的工作,新的功能或者需求会在闲暇时间进行开发,BUG会优先进行处理。
- 如果在使用中遇到问题或者有新的需求,请在[](https://github.com/kelvinBen/AppInfoScanner/issues)提交BUG反馈,提交BUG前请先阅读最后的"常见问题"。 - 如果在使用中遇到问题或者有新的需求,请在[](https://github.com/kelvinBen/AppInfoScanner/issues)提交BUG反馈,提交BUG前请先阅读最后的"常见问题"。
- 如果您觉得这个项目对您有用,请点击本项目右上角的"star"按钮。 - 如果您觉得这个项目对您有用,请点击本项目右上角的"star"按钮。
- 如果您想持续跟进新的版本情况,请点击本项目右上角的"Watch"按钮。 - 如果您想持续跟进新的版本情况,请点击本项目右上角的"Watch"按钮。
- 如果您想参与本项目的开发,请点击本项目右上角的"Fork"按钮,否则请勿点击"Fork"按钮。 - 如果您想参与本项目的开发,请点击本项目右上角的"Fork"按钮,否则请勿点击"Fork"按钮。
## 免责声明 ### 免责声明
请勿将本项目技术或代码应用在恶意软件制作、软件著作权/知识产权盗取或不当牟利等**非法用途**中。实施上述行为或利用本项目对非自己著作权所有的程序进行数据嗅探将涉嫌违反《中华人民共和国刑法》第二百一十七条、第二百八十六条,《中华人民共和国网络安全法》《中华人民共和国计算机软件保护条例》等法律规定。本项目提及的技术仅可用于私人学习测试等合法场景中,任何不当利用该技术所造成的刑事、民事责任均与本项目作者无关。 请勿将本项目技术或代码应用在恶意软件制作、软件著作权/知识产权盗取或不当牟利等**非法用途**中。实施上述行为或利用本项目对非自己著作权所有的程序进行数据嗅探将涉嫌违反《中华人民共和国刑法》第二百一十七条、第二百八十六条,《中华人民共和国网络安全法》《中华人民共和国计算机软件保护条例》等法律规定。本项目提及的技术仅可用于私人学习测试等合法场景中,任何不当利用该技术所造成的刑事、民事责任均与本项目作者无关。
## 适用场景 ### 适用场景
- 日常渗透测试中对APP中进行关键资产信息收集,比如URL地址、IP地址、关键字等信息的采集等。 - 日常渗透测试中对APP中进行关键资产信息收集,比如URL地址、IP地址、关键字等信息的采集等。
- 大型攻防演练场景中对APP中进行关键资产信息收集,比如URL地址、IP地址、关键字等信息的采集等。 - 大型攻防演练场景中对APP中进行关键资产信息收集,比如URL地址、IP地址、关键字等信息的采集等。
- 对WEB网站源代码进行URL地址、IP地址、关键字等信息进行采集等(可以是开源的代码也可以是右击网页源代码另存为)。 - 对WEB网站源代码进行URL地址、IP地址、关键字等信息进行采集等(可以是开源的代码也可以是右击网页源代码另存为)。
- 对H5页面进行进行URL地址、IP地址、关键字等信息进行采集等。 - 对H5页面进行进行URL地址、IP地址、关键字等信息进行采集等。
- 对某个APP进行定相信息收集等 - 对某个APP进行定相信息收集等
## 功能介绍: ### 功能介绍:
- [x] 支持目录级别的批量扫描 - [x] 支持目录级别的批量扫描
- [x] 支持DEX、APK、IPA、MACH-O、HTML、JS、Smali、ELF等文件的信息收集 - [x] 支持DEX、APK、IPA、MACH-O、HTML、JS、Smali、ELF等文件的信息收集
- [x] 支持APK、IPA、H5等文件自动下载并进行一键信息收集 - [x] 支持APK、IPA、H5等文件自动下载并进行一键信息收集
@@ -44,15 +44,15 @@
- [ ] 一键对APK文件进行自动修复 - [ ] 一键对APK文件进行自动修复
- [ ] 识别到壳后自动进行脱壳处理 - [ ] 识别到壳后自动进行脱壳处理
## 部分截图 ### 部分截图
![](result.png) ![](result.png)
## 环境说明 ### 环境说明
- Apk文件解析需要使用JAVA环境,JAVA版本1.8及以下 - Apk文件解析需要使用JAVA环境,JAVA版本1.8及以下
- Python3的运行环境 - Python3的运行环境
## 目录说明 ### 目录说明
``` ```
AppInfoScanner AppInfoScanner
|-- libs 程序的核心代码 |-- libs 程序的核心代码
@@ -81,7 +81,7 @@ AppInfoScanner
|-- requirements.txt 程序中需要安装的依赖库 |-- requirements.txt 程序中需要安装的依赖库
|-- update.md 程序历史版本信息 |-- update.md 程序历史版本信息
``` ```
## 使用说明 ### 使用说明
1. 下载 1. 下载
``` ```
@@ -100,7 +100,7 @@ AppInfoScanner
2. 安装依赖库 2. 安装依赖库
``` ```
cd AppInfoScanner cd AppInfoScanner
python -m pip install -r requirements.txt python3 -m pip install -r requirements.txt
``` ```
3. 运行(基础版) 3. 运行(基础版)
@@ -108,29 +108,29 @@ AppInfoScanner
- 扫描Android应用的APK文件、DEX文件、需要下载的APK文件下载地址、保存需要扫描的文件的目录 - 扫描Android应用的APK文件、DEX文件、需要下载的APK文件下载地址、保存需要扫描的文件的目录
``` ```
python app.py android -i <Your APK File or DEX File or APK Download Url or Save File Dir> python3 app.py android -i <Your APK File or DEX File or APK Download Url or Save File Dir>
``` ```
- 扫描iOS应用的IPA文件、Mach-o文件、需要下载的IPA文件下载地址、保存需要扫描的文件目录 - 扫描iOS应用的IPA文件、Mach-o文件、需要下载的IPA文件下载地址、保存需要扫描的文件目录
``` ```
python app.py ios -i <Your IPA file or Mach-o File or IPA Download Url or Save File Dir> python3 app.py ios -i <Your IPA file or Mach-o File or IPA Download Url or Save File Dir>
``` ```
- 扫描Web站点的文件、目录、需要缓存的站点URl - 扫描Web站点的文件、目录、需要缓存的站点URl
``` ```
python app.py web -i <Your Web file or Save Web Dir or Web Cache Url> python3 app.py web -i <Your Web file or Save Web Dir or Web Cache Url>
``` ```
## 进阶操作指南 ### 进阶操作指南
### 基本命令格式 #### 基本命令格式
``` ```
python app.py [TYPE] [OPTIONS] <The URL or directory to scan> python3 app.py [TYPE] [OPTIONS] <The URL or directory to scan>
``` ```
### 符号信息说明 #### 符号信息说明
``` ```
<> 代表需要扫描的文件或者目录或者URL地址 <> 代表需要扫描的文件或者目录或者URL地址
@@ -138,7 +138,7 @@ python app.py [TYPE] [OPTIONS] <The URL or directory to scan>
[] 代表需要输入的参数 [] 代表需要输入的参数
``` ```
### TYPE参数详细说明 #### TYPE参数详细说明
此参数类型对应基本命令格式中的[TYPE],目前仅支持[android/ios/web]三种类型形式,三种类型形式必须指定一个。 此参数类型对应基本命令格式中的[TYPE],目前仅支持[android/ios/web]三种类型形式,三种类型形式必须指定一个。
``` ```
@@ -150,7 +150,7 @@ web: 用于扫描WEB站点或者H5相关的文件内容
支持自动根据后缀名称进行修正,即便输入的是ios,实际上-i 输入的参数的文件名为XXX.apk,则会执行android相关的扫描。 支持自动根据后缀名称进行修正,即便输入的是ios,实际上-i 输入的参数的文件名为XXX.apk,则会执行android相关的扫描。
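A rough sketch of the suffix-based correction described above (illustrative only; `correct_scan_type` is a hypothetical helper, not part of the project):
```
import os

def correct_scan_type(requested_type: str, input_path: str) -> str:
    """Return the scan type, corrected by the input file's suffix when possible."""
    suffix_map = {".apk": "android", ".dex": "android", ".ipa": "ios"}
    suffix = os.path.splitext(input_path)[1].lower()
    # If the suffix clearly identifies the platform, it wins over the requested type.
    return suffix_map.get(suffix, requested_type)

# Example: requesting "ios" for an .apk file still yields an Android scan.
print(correct_scan_type("ios", r"C:\Users\Administrator\Desktop\Demo.apk"))  # android
```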
### OPTIONS参数详细说明 #### OPTIONS参数详细说明
该参数类型对应基本命令格式中的[OPTIONS],支持多个参数共同使用 该参数类型对应基本命令格式中的[OPTIONS],支持多个参数共同使用
``` ```
@@ -164,173 +164,173 @@ web: 用于扫描WEB站点或者H5相关的文件内容
-p 或者 -- package: 指定Android的APK文件或者DEX文件需要扫描的JAVA包名信息。此参数只能在android类型下使用。 -p 或者 -- package: 指定Android的APK文件或者DEX文件需要扫描的JAVA包名信息。此参数只能在android类型下使用。
``` ```
### 具体使用方法 #### 具体使用方法
#### Android相关基本操作 ##### Android相关基本操作
- 对本地APK文件进行扫描 - 对本地APK文件进行扫描
``` ```
python app.py android -i <Your apk file> python3 app.py android -i <Your apk file>
例: 例:
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk
``` ```
- 对本地Dex文件进行扫描 - 对本地Dex文件进行扫描
``` ```
python app.py android -i <Your DEX file> python3 app.py android -i <Your DEX file>
例: 例:
python app.py android -i C:\Users\Administrator\Desktop\Demo.dex python3 app.py android -i C:\Users\Administrator\Desktop\Demo.dex
``` ```
- 对URL地址中包含的APK文件进行扫描 - 对URL地址中包含的APK文件进行扫描
``` ```
python app.py android -i <APK Download Url> python3 app.py android -i <APK Download Url>
例: 例:
python app.py android -i "https://127.0.0.1/Demo.apk" python3 app.py android -i "https://127.0.0.1/Demo.apk"
``` ```
需要注意此处如果URL地址过长需要使用双引号(")进行包裹 需要注意此处如果URL地址过长需要使用双引号(")进行包裹
#### iOS相关基本操作 ##### iOS相关基本操作
- 对本地IPA文件进行扫描 - 对本地IPA文件进行扫描
``` ```
python app.py ios -i <Your ipa file> python3 app.py ios -i <Your ipa file>
例: 例:
python app.py ios -i "C:\Users\Administrator\Desktop\Demo.ipa" python3 app.py ios -i "C:\Users\Administrator\Desktop\Demo.ipa"
``` ```
- 对本地Macho文件进行扫描 - 对本地Macho文件进行扫描
``` ```
python app.py ios -i <Your Mach-o file> python3 app.py ios -i <Your Mach-o file>
例: 例:
python app.py ios -i "C:\Users\Administrator\Desktop\Demo\Payload\Demo.app\Demo" python3 app.py ios -i "C:\Users\Administrator\Desktop\Demo\Payload\Demo.app\Demo"
``` ```
- 对URL地址中包含的IPA文件进行扫描 - 对URL地址中包含的IPA文件进行扫描
``` ```
python app.py ios -i <IPA Download Url> python3 app.py ios -i <IPA Download Url>
例: 例:
python app.py ios -i "https://127.0.0.1/Demo.ipa" python3 app.py ios -i "https://127.0.0.1/Demo.ipa"
``` ```
需要注意此处如果URL地址过长需要使用双引号(")进行包裹,暂时不支持对Apple Store中的IPA文件进行扫描 需要注意此处如果URL地址过长需要使用双引号(")进行包裹,暂时不支持对Apple Store中的IPA文件进行扫描
#### Web相关基本操作 ##### Web相关基本操作
- 对本地WEB站点进行扫描 - 对本地WEB站点进行扫描
``` ```
python app.py web -i <Your web file> python3 app.py web -i <Your web file>
例: 例:
python app.py web -i "C:\Users\Administrator\Desktop\Demo.html" python3 app.py web -i "C:\Users\Administrator\Desktop\Demo.html"
``` ```
- 对URL地址中包含的WEB站点文件进行扫描 - 对URL地址中包含的WEB站点文件进行扫描
``` ```
python app.py web -i <Web Download Url> python3 app.py web -i <Web Download Url>
例: 例:
python app.py web -i "https://127.0.0.1/Demo.html" python3 app.py web -i "https://127.0.0.1/Demo.html"
``` ```
#### 具有共同性的操作 ##### 具有共同性的操作
以下操作均以android类型为例: 以下操作均以android类型为例:
- 对一个本地的目录进行扫描 - 对一个本地的目录进行扫描
``` ```
python app.py android -i <Your Dir> python3 app.py android -i <Your Dir>
例: 例:
python app.py android -i C:\Users\Administrator\Desktop\Demo python3 app.py android -i C:\Users\Administrator\Desktop\Demo
``` ```
- 添加临时规则或者关键字 - 添加临时规则或者关键字
``` ```
python app.py android -i <Your apk> -r <the keyword | the rules> python3 app.py android -i <Your apk> -r <the keyword | the rules>
例: 例:
添加对百度域名的扫描 添加对百度域名的扫描
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk -r ".*baidu.com.*" python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk -r ".*baidu.com.*"
``` ```
- 关闭网络嗅探功能 - 关闭网络嗅探功能
``` ```
python app.py android -i <Your apk> -s python3 app.py android -i <Your apk> -s
例: 例:
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk -s python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk -s
``` ```
- 忽略所有的资源文件 - 忽略所有的资源文件
``` ```
python app.py android -i <Your apk> -n python3 app.py android -i <Your apk> -n
例: 例:
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk -n python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk -n
``` ```
- 关闭输出所有符合扫描规则内容的功能 - 关闭输出所有符合扫描规则内容的功能
``` ```
python app.py android -i <Your apk> -a python3 app.py android -i <Your apk> -a
例: 例:
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk -a python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk -a
``` ```
- 设置并发数量 - 设置并发数量
``` ```
python app.py android -i <Your apk> -t 20 python3 app.py android -i <Your apk> -t 20
例: 例:
设置20个并发线程 设置20个并发线程
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk -t 20 python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk -t 20
``` ```
- 指定结果集和缓存文件输出目录 - 指定结果集和缓存文件输出目录
``` ```
python app.py android -i <Your apk> -o <output path> python3 app.py android -i <Your apk> -o <output path>
例: 例:
比如输出到桌面的Temp目录 比如输出到桌面的Temp目录
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk -o C:\Users\Administrator\Desktop\Temp python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk -o C:\Users\Administrator\Desktop\Temp
``` ```
- 对指定包名下的文件内容进行扫描,该功能仅支持android类型 - 对指定包名下的文件内容进行扫描,该功能仅支持android类型
``` ```
python app.py android -i <Your apk> -p <Java package name> python3 app.py android -i <Your apk> -p <Java package name>
例: 例:
比如需要过滤com.baidu包名下的内容 比如需要过滤com.baidu包名下的内容
python app.py android -i C:\Users\Administrator\Desktop\Demo.apk -p "com.baidu" python3 app.py android -i C:\Users\Administrator\Desktop\Demo.apk -p "com.baidu"
``` ```
## 高级版使用说明 ### 高级版使用说明
该项目中的程序仅作为一个基本的架子,会内置一些基本的规则,并不是每一个输入的内容都可以完成相关的扫描工作。所以可以根据自己的需要进行相关规则的配置,优秀的配置内容可以达到质的的效果。 该项目中的程序仅作为一个基本的架子,会内置一些基本的规则,并不是每一个输入的内容都可以完成相关的扫描工作。所以可以根据自己的需要进行相关规则的配置,优秀的配置内容可以达到质的的效果。
- 配置文件路径为 根目录下的config.py文件,即README.md的同级目录 - 配置文件路径为 根目录下的config.py文件,即README.md的同级目录
### 配置项说明 #### 配置项说明
``` ```
filter_components: 此配置项用于配置相关组件内容,包括Json组件或者XML组件等 filter_components: 此配置项用于配置相关组件内容,包括Json组件或者XML组件等
filter_strs: 用于配置需要进行扫描的文件内容,比如需要扫描端口号,则配置为:"r'.*://([\d{1,3}\.]{3}\d{1,3}).*'" filter_strs: 用于配置需要进行扫描的文件内容,比如需要扫描端口号,则配置为:"r'.*://([\d{1,3}\.]{3}\d{1,3}).*'"
@@ -343,38 +343,36 @@ data: 用于配置自动下载过程中需要的请求报文体
method: 用于配置自动下载过程中需要的请求方法 method: 用于配置自动下载过程中需要的请求方法
``` ```
## 常见问题 ### 常见问题
### 1. 信息检索垃圾数据过多? #### 1. 信息检索垃圾数据过多?
``` ```
方法1: 根据实际情况调整config.py中的规则信息 方法1: 根据实际情况调整config.py中的规则信息
方法2: 忽略资源文件 方法2: 忽略资源文件
``` ```
### 2. 出现错误:Error: This application has shell, the retrieval results may not be accurate, Please remove the shell and try again! #### 2. 出现错误:Error: This application has shell, the retrieval results may not be accurate, Please remove the shell and try again!
说明需要扫描的应用存在壳,需要进行脱壳/砸壳以后才能进行扫描,目前可以结合以下工具进行脱壳/砸壳处理 说明需要扫描的应用存在壳,需要进行脱壳/砸壳以后才能进行扫描,目前可以结合以下工具进行脱壳/砸壳处理
``` ```
Android: Android:
xposed模块: dexdump xposed模块: dexdump
frida模块: FRIDA-DEXDump frida模块: FRIDA-DEXDump
无Root脱壳:blackdex
iOS: iOS:
firda模块: firda模块:
windows系统使用: frida-ipa-dump windows系统使用: frida-ipa-dump
MacOS系统使用:frida-ios-dump MacOS系统使用:frida-ios-dump
``` ```
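The shell check is driven by the `shell_list` entries in config.py; a simplified sketch of the idea (the `looks_packed` helper is hypothetical, not the project's Android task code):
```
import config  # repository-root config.py, which defines shell_list

def looks_packed(application_class: str) -> bool:
    """Return True if the APK's Application class matches a known packer stub."""
    return application_class in config.shell_list

# Example: a manifest declaring com.stub.StubApp suggests a 360-style packer.
print(looks_packed("com.stub.StubApp"))   # True
print(looks_packed("com.example.MyApp"))  # False
```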
### 3. 出现错误: File download failed! Please download the file manually and try again. #### 3. 出现错误: File download failed! Please download the file manually and try again.
文件下载失败。 文件下载失败。
``` ```
1) 请检查输入的URL地址是否正确 1) 请检查输入的URL地址是否正确
2)请检查网络是否存在问题或者在配置文件config.py中配置请求头信息(headers)、请求报文体(data)、请求方法(method)保存后重新再执行。 2)请检查网络是否存在问题或者在配置文件config.py中配置请求头信息(headers)、请求报文体(data)、请求方法(method)保存后重新再执行。
``` ```
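A minimal sketch of the kind of request the downloader builds from the configured `headers`, `data` and `method` (illustrative only; session retries and threading are omitted, see libs/core/download.py for the actual logic):
```
import requests
import config  # headers, data and method come from the repository-root config.py

def fetch(url: str) -> requests.Response:
    """Issue the download request using the configured method, headers and body."""
    if config.method.upper() == "POST":
        return requests.post(url, data=config.data, headers=config.headers, timeout=30)
    return requests.get(url, headers=config.headers, timeout=30)

# resp = fetch("https://127.0.0.1/Demo.apk")
# resp.raise_for_status()
```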
### 4. 出现错误:Decompilation failed, please submit error information at https://github.com/kelvinBen/AppInfoScanner/issues" #### 4. 出现错误:Decompilation failed, please submit error information at https://github.com/kelvinBen/AppInfoScanner/issues"
文件反编译失败。 文件反编译失败。
@@ -415,22 +413,16 @@ APP组件: fastjson com.alibaba.fastjson
**微信**:bromomo (添加好友请备注:GitHub) **微信**:bromomo (添加好友请备注:GitHub)
**微信群**:
![image](https://user-images.githubusercontent.com/19259171/177041407-66b627d7-39b5-40e7-9858-85dca5b4f958.png)
如无法加入请添加微信好友后进群。
**邮箱**:blsm@vip.qq.com **邮箱**:blsm@vip.qq.com
提交需求、提交BUG修复、技术交流、商务合作均可添加作者好友。 提交需求、提交BUG修复、技术交流、商务合作均可添加作者好友。
## Stargazers over time ## Stargazers
[![Stargazers over time](https://starchart.cc/kelvinBen/AppInfoScanner.svg)](https://starchart.cc/kelvinBen/AppInfoScanner) [![Stargazers over time](https://starchart.cc/kelvinBen/AppInfoScanner.svg)](https://starchart.cc/kelvinBen/AppInfoScanner)
## 404StarLink 2.0 - Galaxy # 404StarLink 2.0 - Galaxy
![](https://github.com/knownsec/404StarLink-Project/raw/master/logo.png) ![](https://github.com/knownsec/404StarLink-Project/raw/master/logo.png)
AppInfoScanner 是 404Team [星链计划2.0](https://github.com/knownsec/404StarLink2.0-Galaxy)中的一环,如果对AppInfoScanner 有任何疑问又或是想要找小伙伴交流,可以参考星链计划的加群方式。 AppInfoScanner 是 404Team [星链计划2.0](https://github.com/knownsec/404StarLink2.0-Galaxy)中的一环,如果对AppInfoScanner 有任何疑问又或是想要找小伙伴交流,可以参考星链计划的加群方式。
[https://github.com/knownsec/404StarLink2.0-Galaxy#community](https://github.com/knownsec/404StarLink2.0-Galaxy#community) [https://github.com/knownsec/404StarLink2.0-Galaxy#community](https://github.com/knownsec/404StarLink2.0-Galaxy#community)

app.py (126)

@@ -4,97 +4,75 @@
# Github: https://github.com/kelvinBen/AppInfoScanner # Github: https://github.com/kelvinBen/AppInfoScanner
import click import click
import logging
from libs.core import Bootstrapper from libs.core import Bootstrapper
from libs.task.base_task import BaseTask from libs.task.base_task import BaseTask
@click.group(help="Python script for automatically retrieving key information in app.") @click.group(help="Python script for automatically retrieving key information in app.")
def cli(): def cli():
pass try:
LOG_FORMAT = "%(message)s" # 日志格式化输出
fp = logging.FileHandler('info.log', mode='w',encoding='utf-8')
fs = logging.StreamHandler()
logging.basicConfig(level=logging.INFO, format=LOG_FORMAT, handlers=[fp, fs]) # 调用
except Exception as e:
logging.error("{}".format(e))
# 创建Android任务 # 创建Android任务
@cli.command(help="Get the key information of Android system.") @cli.command(help="Get the key information of Android system.")
@click.option("-i", "--inputs", required=True, type=str, @click.option("-i", "--inputs", required=True, type=str, help="Please enter the APK file or DEX file to be scanned or the corresponding APK download address.")
help="Please enter the APK file or DEX file to be scanned or the corresponding APK download address.") @click.option("-r", "--rules", required=False, type=str, default="", help="Please enter a rule for temporary scanning of file contents.")
@click.option("-r", "--rules", required=False, type=str, default="", @click.option("-s", "--sniffer", is_flag=True, default=False, help="Enable the network sniffer function. It is on by default.")
help="Please enter a rule for temporary scanning of file contents.") @click.option("-n", '--no-resource', is_flag=True, default=False,help="Ignore all resource files, including network sniffing. It is not enabled by default.")
@click.option("-s", "--sniffer", is_flag=True, default=False, @click.option("-a", '--all',is_flag=True, default=False,help="Output the string content that conforms to the scan rules.It is on by default.")
help="Enable the network sniffer function. It is on by default.") @click.option("-t", '--threads',required=False, type=int,default=10,help="Set the number of concurrency. The larger the concurrency, the faster the speed. The default value is 10.")
@click.option("-n", '--no-resource', is_flag=True, default=False, @click.option("-o", '--output',required=False, type=str,default=None,help="Specify the result set output directory.")
help="Ignore all resource files, including network sniffing. It is not enabled by default.") @click.option("-p", '--package',required=False,type=str,default="",help="Specifies the package name information that needs to be scanned.")
@click.option("-a", '--all', is_flag=True, default=False, def android(inputs: str, rules: str, sniffer: bool, no_resource:bool, all:bool, threads:int, output, package:str) -> None:
help="Output the string content that conforms to the scan rules.It is on by default.")
@click.option("-t", '--threads', required=False, type=int, default=10,
help="Set the number of concurrency. The larger the concurrency, the faster the speed. The default value is 10.") bootstrapper = Bootstrapper(rules, sniffer, threads, all ,no_resource)
@click.option("-o", '--output', required=False, type=str, default=None, help="Specify the result set output directory.") bootstrapper.init_dir(__file__, output)
@click.option("-p", '--package', required=False, type=str, default="",
help="Specifies the package name information that needs to be scanned.") BaseTask().start("Android", inputs, package)
def android(inputs: str, rules: str, sniffer: bool, no_resource: bool, all: bool, threads: int, output,
package: str) -> None:
try:
bootstrapper = Bootstrapper(__file__, output, all, no_resource)
bootstrapper.init()
BaseTask("Android", inputs, rules, sniffer, threads, package).start()
except Exception as e:
raise e
@cli.command(help="Get the key information of iOS system.") @cli.command(help="Get the key information of iOS system.")
@click.option("-i", "--inputs", required=True, type=str, @click.option("-i", "--inputs", required=True, type=str, help="Please enter IPA file or ELF file to scan or corresponding IPA download address. App store is not supported at present.")
help="Please enter IPA file or ELF file to scan or corresponding IPA download address. App store is not supported at present.") @click.option("-r", "--rules", required=False, type=str, default="", help="Please enter a rule for temporary scanning of file contents.")
@click.option("-r", "--rules", required=False, type=str, default="", @click.option("-s", "--sniffer", is_flag=True, default=False, help="Enable the network sniffer function. It is on by default.")
help="Please enter a rule for temporary scanning of file contents.") @click.option("-n", '--no-resource', is_flag=True, default=False,help="Ignore all resource files, including network sniffing. It is not enabled by default.")
@click.option("-s", "--sniffer", is_flag=True, default=False, @click.option("-a", '--all',is_flag=True, default=False,help="Output the string content that conforms to the scan rules.It is on by default.")
help="Enable the network sniffer function. It is on by default.") @click.option("-t", '--threads',required=False, type=int,default=10,help="Set the number of concurrency. The larger the concurrency, the faster the speed. The default value is 10.")
@click.option("-n", '--no-resource', is_flag=True, default=False, @click.option("-o", '--output',required=False, type=str,default=None,help="Specify the result set output directory.")
help="Ignore all resource files, including network sniffing. It is not enabled by default.") def ios(inputs: str, rules: str, sniffer: bool, no_resource:bool, all:bool, threads:int, output:str) -> None:
@click.option("-a", '--all', is_flag=True, default=False,
help="Output the string content that conforms to the scan rules.It is on by default.")
@click.option("-t", '--threads', required=False, type=int, default=10,
help="Set the number of concurrency. The larger the concurrency, the faster the speed. The default value is 10.") bootstrapper = Bootstrapper(rules, sniffer, threads, all ,no_resource)
@click.option("-o", '--output', required=False, type=str, default=None, help="Specify the result set output directory.") bootstrapper.init_dir(__file__, output)
def ios(inputs: str, rules: str, sniffer: bool, no_resource: bool, all: bool, threads: int, output: str) -> None: BaseTask().start("iOS", inputs)
try:
bootstrapper = Bootstrapper(__file__, output, all, no_resource)
bootstrapper.init()
BaseTask("iOS", inputs, rules, sniffer, threads).start()
except Exception as e:
raise e
@cli.command(help="Get the key information of Web system.") @cli.command(help="Get the key information of Web system.")
@click.option("-i", "--inputs", required=True, type=str, @click.option("-i", "--inputs", required=True, type=str, help="Please enter the site directory or site file to scan or the corresponding site download address.")
help="Please enter the site directory or site file to scan or the corresponding site download address.") @click.option("-r", "--rules", required=False, type=str, default="", help="Please enter a rule for temporary scanning of file contents.")
@click.option("-r", "--rules", required=False, type=str, default="", @click.option("-s", "--sniffer", is_flag=True, default=False, help="Enable the network sniffer function. It is on by default.")
help="Please enter a rule for temporary scanning of file contents.") @click.option("-n", '--no-resource', is_flag=True, default=False,help="Ignore all resource files, including network sniffing. It is not enabled by default.")
@click.option("-s", "--sniffer", is_flag=True, default=False, @click.option("-a", '--all',is_flag=True, default=False,help="Output the string content that conforms to the scan rules.It is on by default.")
help="Enable the network sniffer function. It is on by default.") @click.option("-t", '--threads',required=False, type=int,default=10,help="Set the number of concurrency. The larger the concurrency, the faster the speed. The default value is 10.")
@click.option("-n", '--no-resource', is_flag=True, default=False, @click.option("-o", '--output',required=False, type=str,default=None,help="Specify the result set output directory.")
help="Ignore all resource files, including network sniffing. It is not enabled by default.") def web(inputs: str, rules: str, sniffer: bool, no_resource:bool, all:bool, threads:int, output:str) -> None:
@click.option("-a", '--all', is_flag=True, default=False,
help="Output the string content that conforms to the scan rules.It is on by default.")
@click.option("-t", '--threads', required=False, type=int, default=10, bootstrapper = Bootstrapper(rules, sniffer, threads, all ,no_resource)
help="Set the number of concurrency. The larger the concurrency, the faster the speed. The default value is 10.") bootstrapper.init_dir(__file__, output)
@click.option("-o", '--output', required=False, type=str, default=None, help="Specify the result set output directory.") BaseTask().start("Web", inputs)
def web(inputs: str, rules: str, sniffer: bool, no_resource: bool, all: bool, threads: int, output: str) -> None:
try:
bootstrapper = Bootstrapper(__file__, output, all, no_resource)
bootstrapper.init()
BaseTask("Web", inputs, rules, sniffer, threads).start()
except Exception as e:
raise e
def main(): def main():
cli()
cli()
if __name__ == "__main__": if __name__ == "__main__":
main() main()
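For orientation, a stripped-down sketch of the click-group plus shared-logging pattern the new app.py uses (options and the other sub-commands omitted; names simplified):
```
import logging
import click

@click.group()
def cli():
    # One file handler plus one console handler, message-only format.
    handlers = [logging.FileHandler("info.log", mode="w", encoding="utf-8"),
                logging.StreamHandler()]
    logging.basicConfig(level=logging.INFO, format="%(message)s", handlers=handlers)

@cli.command()
@click.option("-i", "--inputs", required=True, type=str)
def android(inputs: str) -> None:
    logging.info("[*] scanning %s", inputs)

if __name__ == "__main__":
    cli()
```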

@@ -8,7 +8,7 @@
# com.alibaba.fastjson -> fastjson # com.alibaba.fastjson -> fastjson
# com.google.gson -> gson # com.google.gson -> gson
# com.fasterxml.jackson -> jackson # com.fasterxml.jackson -> jackson
# net.sf.json -> # net.sf.json ->
# javax.xml.parsers.DocumentBuilder -> dom方式 # javax.xml.parsers.DocumentBuilder -> dom方式
# javax.xml.parsers.SAXParser -> sax方式 # javax.xml.parsers.SAXParser -> sax方式
# org.jdom.input.SAXBuilder -> jdom # org.jdom.input.SAXBuilder -> jdom
@@ -28,7 +28,7 @@ filter_components = [
# 1. https://以及http://开头的 # 1. https://以及http://开头的
# 2. IPv4的ip地址 # 2. IPv4的ip地址
# 3. URI地址,URI不能很好的拼接所以此处忽略 # 3. URI地址,URI不能很好的拼接所以此处忽略
filter_strs = [ filter_strs =[
r'https://.*|http://.*', r'https://.*|http://.*',
# r'.*://([[0-9]{1,3}\.]{3}[0-9]{1,3}).*', # r'.*://([[0-9]{1,3}\.]{3}[0-9]{1,3}).*',
r'.*://([\d{1,3}\.]{3}\d{1,3}).*', r'.*://([\d{1,3}\.]{3}\d{1,3}).*',
@@ -50,80 +50,10 @@ filter_no = [
r'.*w3school.com.cn', r'.*w3school.com.cn',
r'.*apple.com', r'.*apple.com',
r'.*.amap.com', r'.*.amap.com',
r'.*slf4j.org',
] ]
# AK集合
filter_ak_map = {
"Aliyun_OSS": [
r'.*accessKeyId.*".*?"',
r'.*accessKeySecret.*".*?"',
r'.*secret.*".*?"'
],
# "Amazon_AWS_Access_Key_ID": r"([^A-Z0-9]|^)(AKIA|A3T|AGPA|AIDA|AROA|AIPA|ANPA|ANVA|ASIA)[A-Z0-9]{12,}",
# "Amazon_AWS_S3_Bucket": [
# r"//s3-[a-z0-9-]+\\.amazonaws\\.com/[a-z0-9._-]+",
# r"//s3\\.amazonaws\\.com/[a-z0-9._-]+",
# r"[a-z0-9.-]+\\.s3-[a-z0-9-]\\.amazonaws\\.com",
# r"[a-z0-9.-]+\\.s3-website[.-](eu|ap|us|ca|sa|cn)",
# r"[a-z0-9.-]+\\.s3\\.amazonaws\\.com",
# r"amzn\\.mws\\.[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
# ],
# "Artifactory_API_Token": r"(?:\\s|=|:|\"|^)AKC[a-zA-Z0-9]{10,}",
# "Artifactory_Password": r"(?:\\s|=|:|\"|^)AP[\\dABCDEF][a-zA-Z0-9]{8,}",
# "Authorization_Basic": r"basic\\s[a-zA-Z0-9_\\-:\\.=]+",
# "Authorization_Bearer": r"bearer\\s[a-zA-Z0-9_\\-:\\.=]+",
# "AWS_API_Key": r"AKIA[0-9A-Z]{16}",
# "Basic_Auth_Credentials": r"(?<=:\/\/)[a-zA-Z0-9]+:[a-zA-Z0-9]+@[a-zA-Z0-9]+\\.[a-zA-Z]+",
# "Cloudinary_Basic_Auth": r"cloudinary:\/\/[0-9]{15}:[0-9A-Za-z]+@[a-z]+",
# "DEFCON_CTF_Flag": r"O{3}\\{.*\\}",
# "Discord_BOT_Token": r"((?:N|M|O)[a-zA-Z0-9]{23}\\.[a-zA-Z0-9-_]{6}\\.[a-zA-Z0-9-_]{27})$",
# "Facebook_Access_Token": r"EAACEdEose0cBA[0-9A-Za-z]+",
# "Facebook_ClientID": r"[f|F][a|A][c|C][e|E][b|B][o|O][o|O][k|K](.{0,20})?['\"][0-9]{13,17}",
# "Facebook_OAuth": r"[f|F][a|A][c|C][e|E][b|B][o|O][o|O][k|K].*['|\"][0-9a-f]{32}['|\"]",
# "Facebook_Secret_Key": r"([f|F][a|A][c|C][e|E][b|B][o|O][o|O][k|K]|[f|F][b|B])(.{0,20})?['\"][0-9a-f]{32}",
# "Firebase": r"[a-z0-9.-]+\\.firebaseio\\.com",
# "Generic_API_Key": r"[a|A][p|P][i|I][_]?[k|K][e|E][y|Y].*['|\"][0-9a-zA-Z]{32,45}['|\"]",
# "Generic_Secret": r"[s|S][e|E][c|C][r|R][e|E][t|T].*['|\"][0-9a-zA-Z]{32,45}['|\"]",
# "GitHub": r"[g|G][i|I][t|T][h|H][u|U][b|B].*['|\"][0-9a-zA-Z]{35,40}['|\"]",
# "GitHub_Access_Token": r"([a-zA-Z0-9_-]*:[a-zA-Z0-9_-]+@github.com*)$",
# "Google_API_Key": r"AIza[0-9A-Za-z\\-_]{35}",
# "Google_Cloud_Platform_OAuth": r"[0-9]+-[0-9A-Za-z_]{32}\\.apps\\.googleusercontent\\.com",
# "Google_Cloud_Platform_Service_Account": r"\"type\": \"service_account\"",
# "Google_OAuth_Access_Token": r"ya29\\.[0-9A-Za-z\\-_]+",
# "HackerOne_CTF_Flag": r"[h|H]1(?:[c|C][t|T][f|F])?\\{.*\\}",
# "HackTheBox_CTF_Flag": r"[h|H](?:[a|A][c|C][k|K][t|T][h|H][e|E][b|B][o|O][x|X]|[t|T][b|B])\\{.*\\}$",
# "Heroku_API_Key": r"[h|H][e|E][r|R][o|O][k|K][u|U].*[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12}",
# "IP_Address": r"(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])",
# "JSON_Web_Token": r"(?i)^((?=.*[a-z])(?=.*[0-9])(?:[a-z0-9_=]+\\.){2}(?:[a-z0-9_\\-\\+\/=]*))$",
# "LinkFinder": r"(?:\"|')(((?:[a-zA-Z]{1,10}:\/\/|\/\/)[^\"'\/]{1,}\\.[a-zA-Z]{2,}[^\"']{0,})|((?:\/|\\.\\.\/|\\.\/)[^\"'><,;| *()(%%$^\/\\\\\\[\\]][^\"'><,;|()]{1,})|([a-zA-Z0-9_\\-\/]{1,}\/[a-zA-Z0-9_\\-\/]{1,}\\.(?:[a-zA-Z]{1,4}|action)(?:[\\?|#][^\"|']{0,}|))|([a-zA-Z0-9_\\-\/]{1,}\/[a-zA-Z0-9_\\-\/]{3,}(?:[\\?|#][^\"|']{0,}|))|([a-zA-Z0-9_\\-]{1,}\\.(?:php|asp|aspx|jsp|json|action|html|js|txt|xml)(?:[\\?|#][^\"|']{0,}|)))(?:\"|')",
# "Mac_Address": r"(([0-9A-Fa-f]{2}[:]){5}[0-9A-Fa-f]{2}|([0-9A-Fa-f]{2}[-]){5}[0-9A-Fa-f]{2}|([0-9A-Fa-f]{4}[\\.]){2}[0-9A-Fa-f]{4})$",
# "MailChimp_API_Key": r"[0-9a-f]{32}-us[0-9]{1,2}",
# "Mailgun_API_Key": r"key-[0-9a-zA-Z]{32}",
# "Mailto": r"(?<=mailto:)[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\\.[a-zA-Z0-9.-]+",
# "Password_in_URL": r"[a-zA-Z]{3,10}://[^/\\s:@]{3,20}:[^/\\s:@]{3,20}@.{1,100}[\"'\\s]",
# "PayPal_Braintree_Access_Token": r"access_token\\$production\\$[0-9a-z]{16}\\$[0-9a-f]{32}",
# "PGP_private_key_block": r"-----BEGIN PGP PRIVATE KEY BLOCK-----",
# "Picatic_API_Key": r"sk_live_[0-9a-z]{32}",
# "RSA_Private_Key": r"-----BEGIN RSA PRIVATE KEY-----",
# "Slack_Token": r"(xox[p|b|o|a]-[0-9]{12}-[0-9]{12}-[0-9]{12}-[a-z0-9]{32})",
# "Slack_Webhook": r"https://hooks.slack.com/services/T[a-zA-Z0-9_]{8}/B[a-zA-Z0-9_]{8}/[a-zA-Z0-9_]{24}",
# "Square_Access_Token": r"sq0atp-[0-9A-Za-z\\-_]{22}",
# "Square_OAuth_Secret": r"sq0csp-[0-9A-Za-z\\-_]{43}",
# "SSH_DSA_Private_Key": r"-----BEGIN DSA PRIVATE KEY-----",
# "SSH_EC_Private_Key": r"-----BEGIN EC PRIVATE KEY-----",
# "Stripe_API_Key": r"sk_live_[0-9a-zA-Z]{24}",
# "Stripe_Restricted_API_Key": r"rk_live_[0-9a-zA-Z]{24}",
# "TryHackMe_CTF_Flag": r"[t|T](?:[r|R][y|Y][h|H][a|A][c|C][k|K][m|M][e|E]|[h|H][m|M])\\{.*\\}$",
# "Twilio_API_Key": r"SK[0-9a-fA-F]{32}",
# "Twitter_Access_Token": r"[t|T][w|W][i|I][t|T][t|T][e|E][r|R].*[1-9][0-9]+-[0-9a-zA-Z]{40}",
# "Twitter_ClientID": r"[t|T][w|W][i|I][t|T][t|T][e|E][r|R](.{0,20})?['\"][0-9a-z]{18,25}",
# "Twitter_OAuth": r"[t|T][w|W][i|I][t|T][t|T][e|E][r|R].*['|\"][0-9a-zA-Z]{35,44}['|\"]",
# "Twitter_Secret_Key": r"[t|T][w|W][i|I][t|T][t|T][e|E][r|R](.{0,20})?['\"][0-9a-z]{35,44}"
}
# 此处配置壳信息 # 此处配置壳信息
shell_list = [ shell_list =[
'com.stub.StubApp', 'com.stub.StubApp',
's.h.e.l.l.S', 's.h.e.l.l.S',
'com.Kiwisec.KiwiSecApplication', 'com.Kiwisec.KiwiSecApplication',
@@ -140,18 +70,8 @@ shell_list = [
'io.flutter.app.FlutterApplication' 'io.flutter.app.FlutterApplication'
] ]
# 此处配置Android权限信息
apk_permissions = [
'android.permission.CAMERA',
'android.permission.READ_CONTACTS',
'android.permission.READ_SMS',
'android.permission.READ_PROFILE',
'android.permission.READ_PHONE_STATE',
'android.permission.CONTROL_LOCATION_UPDATES'
]
# 此处配置需要扫描的web文件后缀 # 此处配置需要扫描的web文件后缀
web_file_suffix = [ web_file_suffix =[
"html", "html",
"js", "js",
"xml", "xml",
@@ -164,7 +84,7 @@ web_file_suffix = [
] ]
# 配置需要忽略网络嗅探的文件后缀名,此处根据具体需求进行配置,默认为不过滤 # 配置需要忽略网络嗅探的文件后缀名,此处根据具体需求进行配置,默认为不过滤
sniffer_filter = [ sniffer_filter=[
"jpg", "jpg",
"png", "png",
"jpeg", "jpeg",
@@ -173,8 +93,8 @@ sniffer_filter = [
# 配置自动下载Apk文件或者缓存HTML的请求头信息 # 配置自动下载Apk文件或者缓存HTML的请求头信息
headers = { headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0", "User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0",
"Connection": "close" "Connection":"close"
} }
# 配置自动下载Apk文件或者缓存HTML的请求体信息 # 配置自动下载Apk文件或者缓存HTML的请求体信息
@@ -184,3 +104,4 @@ data = {
# 配置自动下载Apk文件或者缓存HTML的请求方法信息,目前仅支持GET和POST # 配置自动下载Apk文件或者缓存HTML的请求方法信息,目前仅支持GET和POST
method = "GET" method = "GET"

@@ -2,145 +2,192 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# Author: kelvinBen # Author: kelvinBen
# Github: https://github.com/kelvinBen/AppInfoScanner # Github: https://github.com/kelvinBen/AppInfoScanner
import os import os
import time
import shutil import shutil
import platform import platform
import logging
backsmali_file = None
apktool_file = None
strings_file = None
app_history_file= None
domain_history_file = None
result_dir = None
download_dir = None
decode_dir = None
user_add_rules = None
# smali 所在路径
smali_path = ""
# backsmli 所在路径
backsmali_path = ""
# apktool 所在路径
apktool_path = ""
# adb 所在路径
adb_path = ""
# frida server 所在路径
frida32_path = ""
frida64_path = ""
# aapt 所在路径
aapt_apth = ""
# 系统类型
os_type = ""
# 输出路径
output_path = ""
# 下载完成标记
download_flag = False download_flag = False
net_sniffer_flag = False
all_string_out = False
no_resource_flag = False
# excel 起始行号
excel_row = 1 excel_row = 1
threads_num = 10
class Bootstrapper(object): class Bootstrapper(object):
def __init__(self, path, out_path, all=False, no_resource=False): def __init__(self, rules, threads, sniffer, all, no_resource):
global smali_path # backsmali 加载路径
global backsmali_path global backsmali_file
global apktool_path # apktool 加载路径
global adb_path global apktool_file
global frida32_path # string 加载路径
global frida64_path global strings_file
global aapt_apth # App 扫描历史文件
global os_type global app_history_file
global output_path # 域名 扫描历史文件
global script_root_dir global domain_history_file
global txt_result_path # 结果输出目录
global xls_result_path global result_dir
global strings_path # 临时文件下载目录
global history_path global download_dir
global app_history_path # 临时反编译目录
global domain_history_path global decode_dir
global excel_row # 下载完成标记
global download_path
global download_flag global download_flag
global out_dir # excel 行号
global all_flag global excel_row
global resource_flag # 用户自定义规则
global user_add_rules
all_flag = not all # 用户指定线程数
resource_flag = no_resource global threads_num
# 网络嗅探标记
create_time = time.strftime("%Y%m%d%H%M%S", time.localtime()) global net_sniffer_flag
script_root_dir = os.path.dirname(os.path.abspath(path)) # 输出所有字符传
if out_path: global all_string_out
out_dir = out_path # 忽略资源标记
else: global no_resource_flag
out_dir = script_root_dir
tools_dir = os.path.join(script_root_dir, "tools") user_add_rules = rules
output_path = os.path.join(out_dir, "out") threads_num = threads
history_path = os.path.join(script_root_dir, "history") net_sniffer_flag = not sniffer
all_string_out = all
no_resource_flag = no_resource
# 需要创建的目录列表
self.__create_dir_list__= []
# 需要删除目录的列表
self.__remove_dir_list__= []
logging.info("[*] System env: {}".format(platform.system()))
def init_dir(self, app_input_path, user_out_path):
logging.info("[*] init dir...")
# 脚本执行目录
script_root_dir = os.path.dirname(os.path.abspath(app_input_path))
# 加载集成的工具
self.__tools_loading__(script_root_dir)
# 构建持久化目录
self.__build_persistent_path__(script_root_dir)
# 构建结果输出目录
self.__build_result_out__path__(script_root_dir,user_out_path)
# 统一目录构建中心
self.__build_dir__()
# 加载集成的工具
def __tools_loading__(self,script_root_dir):
tools_dir = os.path.join(script_root_dir,"tools")
backsmali_file = os.path.join(tools_dir,"baksmali.jar")
logging.info("[*] Backsmali Path: {}".format(backsmali_file))
apktool_file = os.path.join(tools_dir, "apktool.jar")
logging.info("[*] Apktool Path: {}".format(apktool_file))
if platform.system() == "Windows": if platform.system() == "Windows":
machine2bits = {'AMD64': 64, 'x86_64': 64, 'i386': 32, 'x86': 32} machine2bits = {'AMD64':64, 'x86_64': 64, 'i386': 32, 'x86': 32}
machine2bits.get(platform.machine()) machine2bits.get(platform.machine())
if platform.machine() == 'i386' or platform.machine() == 'x86': if platform.machine() == 'i386' or platform.machine() == 'x86':
strings_path = os.path.join(tools_dir, "strings.exe") strings_file = os.path.join(tools_dir,"strings.exe")
else: else:
strings_path = os.path.join(tools_dir, "strings64.exe") strings_file = os.path.join(tools_dir,"strings64.exe")
else:
strings_file ="strings"
logging.info("[*] Strings Path: {}".format(strings_file))
# 构建持久化目录
def __build_persistent_path__(self,script_root_dir):
# 当前用户文档目录
doc_path = os.path.join(os.path.expanduser("~"), 'Documents')
if os.path.exists(doc_path):
app_dir = os.path.join(doc_path,"AppInfoScanner")
else: else:
strings_path = "strings" app_dir = os.path.join(script_root_dir,"AppInfoScanner")
backsmali_path = os.path.join(tools_dir, "baksmali.jar") # 历史任务加载目录
apktool_path = os.path.join(tools_dir, "apktool.jar") history_path = os.path.join(app_dir,"history")
adb_path = os.path.join(tools_dir + '\\unpacker', "adb.exe")
frida32_path = os.path.join(tools_dir + '\\unpacker', "hexl-server-arm32") app_history_file = os.path.join(history_path,"app_history.txt")
frida64_path = os.path.join(tools_dir + '\\unpacker', "hexl-server-arm64") domain_history_file = os.path.join(history_path,"domain_history.txt")
aapt_apth = os.path.join(tools_dir + '\\unpacker', "aapt.exe")
download_path = os.path.join(out_dir, "download") self.__create_dir_list__.append(app_dir)
txt_result_path = os.path.join(out_dir, "result_" + str(create_time) + ".txt") self.__create_dir_list__.append(history_path)
xls_result_path = os.path.join(out_dir, "result_" + str(create_time) + ".xlsx")
app_history_path = os.path.join(history_path, "app_history.txt") # 构建结果输出目录
domain_history_path = os.path.join(history_path, "domain_history.txt") def __build_result_out__path__(self,script_root_dir,user_out_path):
result_out_dir = script_root_dir
def init(self):
if not os.path.exists(out_dir): # 用户指定输出目录结果则为输出到指定目录
os.makedirs(out_dir) if user_out_path:
print("[*] Create directory {}".format(out_dir)) result_out_dir = user_out_path
if os.path.exists(output_path): # 统一输出目录
try: out_dir = os.path.join(result_out_dir,"out")
shutil.rmtree(output_path) # 临时结果输出目录
except Exception as e: decode_dir = os.path.join(out_dir,"decode")
# 解决windows超长文件名删除问题 # 临时文件下载目录
if not (platform.system() == "Windows"): download_dir = os.path.join(out_dir,"download")
raise e # 最终结果输出目录
self.__removed_dirs_cmd__(output_path) result_dir = os.path.join(out_dir,"result")
os.makedirs(output_path) self.__create_dir_list__.append(out_dir)
print("[*] Create directory {}".format(output_path)) self.__create_dir_list__.append(decode_dir)
self.__create_dir_list__.append(download_dir)
if not os.path.exists(download_path): self.__create_dir_list__.append(result_dir)
os.makedirs(download_path) self.__remove_dir_list__.append(decode_dir)
print("[*] Create directory {}".format(download_path))
# 统一目录构建中心
if not os.path.exists(history_path): def __build_dir__(self):
os.makedirs(history_path) for dir_path in self.__create_dir_list__:
print("[*] Create directory {}".format(history_path)) if (os.path.exists(dir_path)) and (dir_path in self.__remove_dir_list__):
# 删除目录
if os.path.exists(txt_result_path): try:
os.remove(txt_result_path) shutil.rmtree(dir_path)
logging.info("[-] Remove Dir: {}".format(dir_path))
if os.path.exists(xls_result_path): except Exception as e:
os.remove(xls_result_path) # 解决windows超长文件名删除问题
if not (platform.system() == "Windows"):
def __removed_dirs_cmd__(self, output_path): raise e
self.__removed_dirs_cmd__(dir_path)
# 创建目录
if not os.path.exists(dir_path):
os.makedirs(dir_path)
logging.info("[+] Create Dir: {}".format(dir_path))
def __removed_dirs_cmd__(self,output_path):
files = os.listdir(output_path) files = os.listdir(output_path)
for file in files: for file in files:
new_dir = os.path.join(output_path, "newdir") new_dir = os.path.join(output_path,"newdir")
old_dir = os.path.join(output_path, file) old_dir = os.path.join(output_path,file)
if not os.path.exists(new_dir): if not os.path.exists(new_dir):
os.makedirs(new_dir) os.makedirs(new_dir)
logging.info("[+] Create Dir: {}".format(new_dir))
os.chdir(output_path) os.chdir(output_path)
cmd = ("robocopy %s %s /purge") % (new_dir, old_dir) cmd = ("robocopy %s %s /purge") % (new_dir, old_dir)
logging.debug("[*] cmd : {}".format(cmd))
os.system(cmd) os.system(cmd)
os.removedirs(new_dir) os.removedirs(new_dir)
os.removedirs(old_dir) os.removedirs(old_dir)
logging.info("[-] Remove Dir: {}".format(new_dir))
logging.info("[-] Remove Dir: {}".format(old_dir))

@@ -2,67 +2,131 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# Author: kelvinBen # Author: kelvinBen
# Github: https://github.com/kelvinBen/AppInfoScanner # Github: https://github.com/kelvinBen/AppInfoScanner
from genericpath import exists
import re
import os
import sys import sys
import time
import uuid
import config import config
import logging
import requests import requests
import threading import threading
import libs.core as cores import libs.core as cores
from requests.packages import urllib3 from requests.packages import urllib3
from requests.adapters import HTTPAdapter from requests.adapters import HTTPAdapter
class DownloadThreads(threading.Thread):
class DownloadThreads(threading.Thread): def __init__(self,threadID, threadName, download_file_queue, download_file_list, types):
threading.Thread.__init__(self)
def __init__(self, input_path, file_name, cache_path, types): self.threadID = threadID
threading.Thread.__init__(self) self.threadName = threadName
self.url = input_path self.download_file_queue = download_file_queue
self.download_file_list = download_file_list
self.types = types self.types = types
self.cache_path = cache_path self.cache_path = None
self.file_name = file_name
def __start__(self):
# 从队列中取数据,直到队列数据不为空为止
while not self.download_file_queue.empty():
file_or_url = self.download_file_queue.get()
if not file_or_url:
logging.error("[x] Failed to get file!")
continue
self.__auto_update_type__(file_or_url)
# 自动更新文件类型
def __auto_update_type__(self,file_or_url):
uuid_name = str(uuid.uuid1()).replace("-","")
# 文件后缀为apk 或者 类型为 Android 则自动修正为Android类型
if file_or_url.endswith("apk") or self.types == "Android":
types = "Android"
file_name = uuid_name + ".apk"
# 文件后缀为dex 或者 类型为 Android 则自动修正为Android类型
elif file_or_url.endswith("dex") or self.types == "Android":
types = "Android"
file_name = uuid_name + ".dex"
# 文件后缀为ipa 或者 类型为 iOS 则自动修正为iOS类型
elif file_or_url.endswith("ipa") or self.types == "iOS":
types = "iOS"
file_name = uuid_name + ".ipa"
else:
# 路径以http://开头或者以https://开头 且 文件是不存在的自动修正为web类型
if (file_or_url.startswith("http://") or file_or_url.startswith("https://")) and (not os.path.exists(file_or_url)):
types = "WEB"
file_name = uuid_name + ".html"
# 其他情况如:types为WEB 或者目录 或者 单独的二进制文件 等交给后面逻辑处理
if file_or_url.startswith("http://") or file_or_url.startswith("https://"):
# 进行文件下载
self.__file_deduplication__(file_name, uuid_name)
if self.cache_path:
file_path = self.cache_path
self.__download_file__(file_or_url,file_path)
#TODO 标记下载过的文件,避免重复下载
else:
types = self.types
file_path = file_or_url
self.download_file_list.append({"path": file_path, "type": types})
# 防止文件名重复导致文件被复写
def __file_deduplication__(self,file_name, uuid_name):
cache_path = os.path.join(cores.download_dir, file_name)
if not os.path.exists(cache_path):
self.cache_path = cache_path
return
new_uuid_name = str(uuid.uuid1()).replace("-","")
new_file_name = file_name.replace(uuid_name,new_uuid_name)
self.__file_deduplication__(new_file_name,new_uuid_name)
def __requset__(self): # 文件下载
def __download_file__(self, url, file_path):
try: try:
session = requests.Session() session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=3)) session.mount('http://', HTTPAdapter(max_retries=3))
session.mount('https://', HTTPAdapter(max_retries=3)) session.mount('https://', HTTPAdapter(max_retries=3))
session.keep_alive = False session.keep_alive =False
session.adapters.DEFAULT_RETRIES = 5 session.adapters.DEFAULT_RETRIES = 5
urllib3.disable_warnings() urllib3.disable_warnings()
if config.method.upper() == "POST": if config.method.upper() == "POST":
resp = session.post( resp = session.post(url=url, params=config.data, headers=config.headers, timeout=30)
url=self.url, params=config.data, headers=config.headers, timeout=30)
else: else:
resp = session.get(url=self.url, data=config.data, resp = session.get(url=url, data=config.data, headers=config.headers, timeout=30)
headers=config.headers, timeout=30)
if resp.status_code == requests.codes.ok: if resp.status_code == requests.codes.ok:
# 下载二进制文件
if self.types == "Android" or self.types == "iOS": if self.types == "Android" or self.types == "iOS":
count = 0 count = 0
progress_tmp = 0 progress_tmp = 0
length = float(resp.headers['content-length']) length = float(resp.headers['content-length'])
with open(self.cache_path, "wb") as f: with open(file_path, "wb") as f:
for chunk in resp.iter_content(chunk_size=512): for chunk in resp.iter_content(chunk_size = 512):
if chunk: if chunk:
f.write(chunk) f.write(chunk)
count += len(chunk) count += len(chunk)
progress = int(count / length * 100) progress = int(count / length * 100)
if progress != progress_tmp: if progress != progress_tmp:
progress_tmp = progress progress_tmp = progress
print("\r", end="") logging.info("\r", end="")
print( logging.info("[*] Download progress: {}%: ".format(progress), "" * (progress // 2), end="")
"[*] Download progress: {}%: ".format(progress), "" * (progress // 2), end="")
sys.stdout.flush() sys.stdout.flush()
f.close() f.close()
else: else:
html = resp.text html = resp.text
with open(self.cache_path, "w", encoding='utf-8', errors='ignore') as f: with open(file_path,"w",encoding='utf-8',errors='ignore') as f:
f.write(html) f.write(html)
f.close() f.close()
cores.download_flag = True cores.download_flag = True
else:
logging.error("[x] {} download fails, status code is {} !!!".format(url, str(resp.status_code)))
except Exception as e: except Exception as e:
raise Exception(e) logging.error("[x] {} download fails, the following exception information:".format(url))
logging.exception(e)
def run(self): def run(self):
threadLock = threading.Lock() threadLock = threading.Lock()
self.__requset__() self.__start__()
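The chunked download with a progress indicator reduces to roughly the following standalone sketch (thread and queue plumbing omitted; `download` is a hypothetical helper, not the project's DownloadThreads class):
```
import sys
import requests

def download(url: str, dest: str, chunk_size: int = 512) -> None:
    """Stream a file to disk, printing percentage progress as chunks arrive."""
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        total = float(resp.headers.get("content-length", 0)) or None
        done = 0
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)
                done += len(chunk)
                if total:
                    sys.stdout.write("\r[*] Download progress: %d%%" % (done / total * 100))
                    sys.stdout.flush()
    print()

# download("https://127.0.0.1/Demo.apk", "Demo.apk")
```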

@@ -4,60 +4,53 @@
# Github: https://github.com/kelvinBen/AppInfoScanner # Github: https://github.com/kelvinBen/AppInfoScanner
import re import re
import time import time
import logging
import threading import threading
import requests import requests
import libs.core as cores import libs.core as cores
class NetThreads(threading.Thread): class NetThreads(threading.Thread):
def __init__(self, threadID, name, domain_queue, worksheet): def __init__(self,threadID,name,domain_queue,worksheet):
threading.Thread.__init__(self) threading.Thread.__init__(self)
self.name = name self.name = name
self.threadID = threadID self.threadID = threadID
self.lock = threading.Lock() self.lock = threading.Lock()
self.domain_queue = domain_queue self.domain_queue = domain_queue
self.worksheet = worksheet self.worksheet = worksheet
def __get_Http_info__(self, threadLock): def __get_Http_info__(self,threadLock):
while True: while True:
if self.domain_queue.empty(): if self.domain_queue.empty():
break break
domains = self.domain_queue.get(timeout=5) domains = self.domain_queue.get(timeout=5)
domain = domains["domain"] domain = domains["domain"]
url_ip = domains["url_ip"] url_ip = domains["url_ip"]
time.sleep(2) time.sleep(2)
result = self.__get_request_result__(url_ip) result = self.__get_request_result__(url_ip)
print("[+] Processing URL address:"+url_ip) logging.info("[+] " + url_ip)
if result != "error": if result != "error":
if self.lock.acquire(True): if self.lock.acquire(True):
cores.excel_row = cores.excel_row + 1 cores.excel_row = cores.excel_row + 1
self.worksheet.cell(row=cores.excel_row, self.worksheet.cell(row=cores.excel_row, column=1).value = cores.excel_row
column=1, value=cores.excel_row-1) self.worksheet.cell(row=cores.excel_row, column=2).value = url_ip
self.worksheet.cell(row=cores.excel_row, self.worksheet.cell(row=cores.excel_row, column=3).value = domain
column=2, value=url_ip)
self.worksheet.cell(row=cores.excel_row,
column=3, value=domain)
if result != "timeout": if result != "timeout":
self.worksheet.cell( self.worksheet.cell(row=cores.excel_row, column=4).value = result["status"]
row=cores.excel_row, column=4, value=result["status"]) self.worksheet.cell(row=cores.excel_row, column=5).value = result["des_ip"]
self.worksheet.cell( self.worksheet.cell(row=cores.excel_row, column=6).value = result["server"]
row=cores.excel_row, column=5, value=result["des_ip"]) self.worksheet.cell(row=cores.excel_row, column=7).value = result["title"]
self.worksheet.cell( self.worksheet.cell(row=cores.excel_row, column=8).value = result["cdn"]
row=cores.excel_row, column=6, value=result["server"]) self.worksheet.cell(row=cores.excel_row, column=9).value = ""
self.worksheet.cell(
row=cores.excel_row, column=7, value=result["title"])
self.worksheet.cell(
row=cores.excel_row, column=8, value=result["cdn"])
self.lock.release() self.lock.release()
def __get_request_result__(self, url): def __get_request_result__(self,url):
result = {"status": "", "server": "", "cookie": "", result={"status":"","server":"","cookie":"","cdn":"","des_ip":"","sou_ip":"","title":""}
"cdn": "", "des_ip": "", "sou_ip": "", "title": ""}
cdn = "" cdn = ""
try: try:
with requests.get(url, timeout=5, stream=True) as rsp: with requests.get(url, timeout=5,stream=True) as rsp:
status_code = rsp.status_code status_code = rsp.status_code
result["status"] = status_code result["status"] = status_code
headers = rsp.headers headers = rsp.headers
@@ -69,21 +62,21 @@ class NetThreads(threading.Thread):
cdn = cdn + headers['X-Via'] cdn = cdn + headers['X-Via']
if "Via" in headers: if "Via" in headers:
cdn = cdn + headers['Via'] cdn = cdn + headers['Via']
result["cdn"] = cdn result["cdn"] = cdn
sock = rsp.raw._connection.sock sock = rsp.raw._connection.sock
if sock: if sock:
des_ip = sock.getpeername()[0] des_ip = sock.getpeername()[0]
sou_ip = sock.getsockname()[0] sou_ip = sock.getsockname()[0]
if des_ip: if des_ip:
result["des_ip"] = des_ip result["des_ip"] = des_ip
if sou_ip: if sou_ip:
result["sou_ip"] = sou_ip result["sou_ip"] = sou_ip
sock.close() sock.close()
html = rsp.text html = rsp.text
title = re.findall('<title>(.+)</title>', html) title = re.findall('<title>(.+)</title>',html)
if title: if title:
result["title"] = title[0] result["title"] = title[0]
rsp.close() rsp.close()
return result return result
except requests.exceptions.InvalidURL as e: except requests.exceptions.InvalidURL as e:
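The sniffer threads share one openpyxl worksheet, so each row write is guarded by a lock; a rough standalone sketch with an assumed column layout (openpyxl is used here only as an illustration of the xlsx output):
```
import threading
from openpyxl import Workbook

lock = threading.Lock()
wb = Workbook()
ws = wb.active
row = 1

def record(url_ip: str, domain: str, status: int) -> None:
    """Append one sniffer result as a worksheet row, guarded against races."""
    global row
    with lock:
        row += 1
        ws.cell(row=row, column=1).value = row
        ws.cell(row=row, column=2).value = url_ip
        ws.cell(row=row, column=3).value = domain
        ws.cell(row=row, column=4).value = status

record("http://127.0.0.1", "127.0.0.1", 200)
wb.save("result.xlsx")
```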

@@ -6,86 +6,72 @@
import re import re
import os import os
import config import config
import logging
import threading import threading
import libs.core as cores import libs.core as cores
class ParsesThreads(threading.Thread): class ParsesThreads(threading.Thread):
def __init__(self, threadID, name, file_queue, result_dict, types): def __init__(self,threadID,name,file_queue,result_dict,types):
threading.Thread.__init__(self) threading.Thread.__init__(self)
self.file_queue = file_queue self.file_queue = file_queue
self.name = name self.name = name
self.threadID = threadID self.threadID = threadID
self.result_list = [] self.result_list = []
self.result_dict = result_dict self.result_dict=result_dict
self.types = types self.types = types
def __regular_parse__(self): def __regular_parse__(self):
while True: while True:
if self.file_queue.empty(): if self.file_queue.empty():
break break
file_path = self.file_queue.get(timeout=5) file_path = self.file_queue.get(timeout = 5)
scan_str = ("[+] Scan file : %s" % file_path) scan_str = ("[+] Scan file : %s" % file_path)
if self.types == "iOS": if self.types == "iOS":
self.__get_string_by_iOS__(file_path) self.__get_string_by_iOS__(file_path)
else: else:
self.__get_string_by_file__(file_path) self.__get_string_by_file__(file_path)
result_set = set(self.result_list) result_set = set(self.result_list)
if len(result_set) != 0: if len(result_set) != 0:
self.result_dict[file_path] = result_set self.result_dict[file_path] = result_set
def __get_string_by_iOS__(self, file_path): def __get_string_by_iOS__(self,file_path):
output_path = cores.output_path output_path = cores.output_path
strings_path = cores.strings_path temp = os.path.join(output_path,"temp.txt")
temp = os.path.join(output_path, "temp.txt") cmd_str = ('"%s" "%s" > "%s"') % (str(cores.strings_file),str(file_path),str(temp))
cmd_str = ('"%s" "%s" > "%s"') % (
str(strings_path), str(file_path), str(temp))
if os.system(cmd_str) == 0: if os.system(cmd_str) == 0:
with open(temp, "r", encoding='utf-8', errors='ignore') as f: with open(temp,"r",encoding='utf-8',errors='ignore') as f:
lines = f.readlines() lines = f.readlines()
for line in lines: for line in lines:
self.__parse_string__(line) self.__parse_string__(line)
def __get_string_by_file__(self, file_path): def __get_string_by_file__(self,file_path):
with open(file_path, "r", encoding="utf8", errors='ignore') as f: with open(file_path,"r",encoding="utf8",errors='ignore') as f :
file_content = f.read() file_content = f.read()
# 获取到所有的字符串 # 获取到所有的字符串
pattern = re.compile(r'\"(.*?)\"') pattern = re.compile(r'\"(.*?)\"')
results = pattern.findall(file_content) results = pattern.findall(file_content)
# 搜素AK和SK信息,由于iOS的逻辑处理效率过慢暂时忽略对iOS的AK检测 # 搜素AK和SK信息
if not (".js" == file_path[-3:] and self.types == "iOS"): if not ".js" == file_path[-3:]:
# 未包含相关字段不进行ak或者sk信息采集 akAndSkList = re.compile(r'.*accessKeyId.*".*"|.*accessKeySecret.*".*"|.*secret.*".*"').findall(file_content)
if "access" in file_content or "secret" in file_content: for akAndSk in akAndSkList:
for key, values in config.filter_ak_map.items(): self.result_list.append(akAndSk.strip())
if isinstance(values, list): logging.info("[+] AK or SK in:",akAndSk.strip())
for value in values:
self.__ak_and_sk__(key, value, file_content)
else:
self.__ak_and_sk__(key, values, file_content)
# 遍历所有的字符串 # 遍历所有的字符串
for result in set(results): for result in set(results):
if ("http://" == result) or ("https://" == result) or result.startswith("https://.") or result.startswith("http://.") : self.__parse_string__(result)
continue
self.__parse_string__(result) def __parse_string__(self,result):
def __ak_and_sk__(self, name, ak_rule, content):
akAndSkList = re.compile(ak_rule).findall(content)
for akAndSk in akAndSkList:
ak = ("[%s]-->:%s") % (name, akAndSk.strip())
self.result_list.append(ak)
print(("[+] [%s] AK or SK in %s:") % (name, akAndSk.strip()))
def __parse_string__(self, result):
# 通过正则筛选需要过滤的字符串 # 通过正则筛选需要过滤的字符串
for filter_str in config.filter_strs: for filter_str in config.filter_strs:
filter_str_pat = re.compile(filter_str) filter_str_pat = re.compile(filter_str)
filter_resl = filter_str_pat.findall(result) filter_resl = filter_str_pat.findall(result)
# 过滤掉未搜索到的内容 # 过滤掉未搜索到的内容
if len(filter_resl) != 0: if len(filter_resl)!=0:
# 提取第一个结果 # 提取第一个结果
resl_str = filter_resl[0] resl_str = filter_resl[0]
# 过滤 # 过滤
@@ -94,32 +80,29 @@ class ParsesThreads(threading.Thread):
self.threadLock.acquire() self.threadLock.acquire()
if cores.all_flag: if cores.all_flag:
print( logging.info("[+] String : {}".format(resl_str))
("[+] The string searched for matching rule is: %s") % (resl_str))
self.result_list.append(resl_str) self.result_list.append(resl_str)
self.threadLock.release() self.threadLock.release()
continue continue
def __filter__(self, resl_str): def __filter__(self,resl_str):
return_flag = 1 return_flag = 1
resl_str = resl_str.replace("\r", "").replace( resl_str = resl_str.replace("\r","").replace("\n","").replace(" ","")
"\n", "").replace(" ", "")
if len(resl_str) == 0: if len(resl_str) == 0:
return 0 return 0
for filte in set(config.filter_no): for filte in set(config.filter_no):
resl_str = resl_str.replace(filte, "") resl_str = resl_str.replace(filte,"")
if len(resl_str) == 0: if len(resl_str) == 0:
return_flag = 0 return_flag = 0
continue continue
if re.match(filte, resl_str): if re.match(filte,resl_str):
return_flag = 0 return_flag = 0
continue continue
return return_flag
return return_flag
def run(self): def run(self):
self.threadLock = threading.Lock() self.threadLock = threading.Lock()
self.__regular_parse__() self.__regular_parse__()
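The parser's core is "extract every quoted string, then keep the ones a filter rule matches"; a compact sketch with the rules inlined instead of read from config.py:
```
import re

FILTERS = [r"https://.*|http://.*"]          # stand-in for config.filter_strs

def extract(file_content: str):
    """Return quoted strings from source text that match at least one filter."""
    hits = []
    for candidate in set(re.findall(r'"(.*?)"', file_content)):
        if any(re.findall(rule, candidate) for rule in FILTERS):
            hits.append(candidate)
    return hits

sample = 'String api = "https://api.example.com/login"; String k = "debug";'
print(extract(sample))  # ['https://api.example.com/login']
```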

@@ -2,490 +2,145 @@
# -*- coding: utf-8 -*-
# Author: kelvinBen
# Github: https://github.com/kelvinBen/AppInfoScanner
import json
import os
import re
import shutil
import subprocess
import config
import logging
import hashlib
import zipfile
import platform
from queue import Queue
import libs.core as cores
class AndroidTask(object):
def __init__(self, path, package):
self.path = path
def __init__(self, file_path, package):
self.input_file_path = file_path
self.package = package
self.file_queue = Queue()
self.shell_flag = False
self.packagename = ""
self.comp_list = []
self.file_identifier = []
self.permissions = []
self.files = []
self.protect_flag = """{
"360加固": [
"assets/.appkey",
"assets/libjiagu.so",
"libjiagu.so",
"libjiagu_art.so",
"libjiagu_x86.so",
"libprotectClass.so",
".appkey",
"1ibjgdtc.so",
"libjgdtc.so",
"libjgdtc_a64.so",
"libjgdtc_art.so",
"libjgdtc_x64.so",
"libjgdtc_x86.so",
"libjiagu_a64.so",
"libjiagu_ls.so",
"libjiagu_x64.so"
],
"APKProtect": [
"libAPKProtect.so"
],
"UU安全": [
"libuusafe.jar.so",
"libuusafe.so",
"libuusafeempty.so",
"assets/libuusafe.jar.so",
"assets/libuusafe.so",
"lib/armeabi/libuusafeempty.so"
],
"apktoolplus": [
"assets/jiagu_data.bin",
"assets/sign.bin",
"jiagu_data.bin",
"lib/armeabi/libapktoolplus_jiagu.so",
"libapktoolplus_jiagu.so",
"sign.bin"
],
"中国移动加固": [
"assets/mogosec_classes",
"assets/mogosec_data",
"assets/mogosec_dexinfo",
"assets/mogosec_march",
"ibmogosecurity.so",
"lib/armeabi/libcmvmp.so",
"lib/armeabi/libmogosec_dex.so",
"lib/armeabi/libmogosec_sodecrypt.so",
"lib/armeabi/libmogosecurity.so",
"libcmvmp.so",
"libmogosec_dex.so",
"libmogosec_sodecrypt.so",
"mogosec_classes",
"mogosec_data",
"mogosec_dexinfo",
"mogosec_march"
],
"几维安全": [
"assets/dex.dat",
"lib/armeabi/kdpdata.so",
"lib/armeabi/libkdp.so",
"lib/armeabi/libkwscmm.so",
"libkwscmm.so",
"libkwscr.so",
"libkwslinker.so"
],
"启明星辰": [
"libvenSec.so",
"libvenustech.so"
],
"网秦加固": [
"libnqshield.so"
],
"娜迦加固": [
"libchaosvmp.so",
"libddog.so",
"libfdog.so"
],
"娜迦加固(新版2022)": [
"assets/maindata/fake_classes.dex",
"lib/armeabi/libxloader.so",
"lib/armeabi-v7a/libxloader.so",
"lib/arm64-v8a/libxloader.so",
"libxloader.so"
],
"娜迦加固(企业版)": [
"libedog.so"
],
"梆梆安全(企业版)": [
"libDexHelper-x86.so",
"libDexHelper.so",
"1ibDexHelper.so"
],
"梆梆安全": [
"libSecShell.so",
"libsecexe.so",
"libsecmain.so",
"libSecShel1.so"
],
"梆梆安全(定制版)": [
"assets/classes.jar",
"lib/armeabi/DexHelper.so"
],
"梆梆安全(免费版)": [
"assets/secData0.jar",
"lib/armeabi/libSecShell-x86.so",
"lib/armeabi/libSecShell.so"
],
"海云安加固": [
"assets/itse",
"lib/armeabi/libitsec.so",
"libitsec.so"
],
"爱加密": [
"assets/af.bin",
"assets/ijiami.ajm",
"assets/ijm_lib/X86/libexec.so",
"assets/ijm_lib/armeabi/libexec.so",
"assets/signed.bin",
"ijiami.dat",
"lib/armeabi/libexecmain.so",
"libexecmain.so"
],
"爱加密企业版": [
"ijiami.ajm"
],
"珊瑚灵御": [
"assets/libreincp.so",
"assets/libreincp_x86.so",
"libreincp.so",
"libreincp_x86.so"
],
"瑞星加固": [
"librsprotect.so"
],
"百度加固": [
"libbaiduprotect.so",
"assets/baiduprotect.jar",
"assets/baiduprotect1.jar",
"baiduprotect1.jar",
"lib/armeabi/libbaiduprotect.so",
"libbaiduprotect_art.so",
"libbaiduprotect_x86.so"
],
"盛大加固": [
"libapssec.so"
],
"网易易盾": [
"libnesec.so"
],
"腾讯": [
"libexec.so",
"libshell.so"
],
"腾讯加固": [
"lib/armeabi/mix.dex",
"lib/armeabi/mixz.dex",
"lib/armeabi/libshella-xxxx.so",
"lib/armeabi/libshellx-xxxx.so",
"tencent_stub"
],
"腾讯乐固(旧版)": [
"libtup.so",
"mix.dex",
"liblegudb.so",
"libshella",
"mixz.dex",
"libshel1x"
],
"腾讯乐固": [
"libshellx"
],
"腾讯乐固(VMP)": [
"lib/arm64-v8a/libxgVipSecurity.so",
"lib/armeabi-v7a/libxgVipSecurity.so",
"libxgVipSecurity.so"
],
"腾讯云": [
"assets/libshellx-super.2021.so",
"lib/armeabi/libshell-super.2019.so",
"lib/armeabi/libshell-super.2020.so",
"lib/armeabi/libshell-super.2021.so",
"lib/armeabi/libshell-super.2022.so",
"lib/armeabi/libshell-super.2023.so",
"tencent_sub"
],
"腾讯云移动应用安全": [
"0000000lllll.dex",
"00000olllll.dex",
"000O00ll111l.dex",
"00O000ll111l.dex",
"0OO00l111l1l",
"o0oooOO0ooOo.dat"
],
"腾讯云移动应用安全(腾讯御安全)": [
"libBugly-yaq.so",
"libshell-super.2019.so",
"libshellx-super.2019.so",
"libzBugly-yaq.so",
"t86",
"tosprotection",
"tosversion",
"000000011111.dex",
"000000111111.dex",
"000001111111",
"00000o11111.dex",
"o0ooo000oo0o.dat"
],
"腾讯御安全": [
"libtosprotection.armeabi-v7a.so",
"libtosprotection.armeabi.so",
"libtosprotection.x86.so",
"assets/libtosprotection.armeabi-v7a.so",
"assets/libtosprotection.armeabi.so",
"assets/libtosprotection.x86.so",
"assets/tosversion",
"lib/armeabi/libTmsdk-xxx-mfr.so",
"lib/armeabi/libtest.so"
],
"腾讯Bugly": [
"lib/arm64-v8a/libBugly.so",
"libBugly.so"
],
"蛮犀": [
"assets/mxsafe.config",
"assets/mxsafe.data",
"assets/mxsafe.jar",
"assets/mxsafe/arm64-v8a/libdSafeShell.so",
"assets/mxsafe/x86_64/libdSafeShell.so",
"libdSafeShell.so"
],
"通付盾": [
"libNSaferOnly.so",
"libegis.so"
],
"阿里加固": [
"assets/armeabi/libfakejni.so",
"assets/armeabi/libzuma.so",
"assets/classes.dex.dat",
"assets/dp.arm-v7.so.dat",
"assets/dp.arm.so.dat",
"assets/libpreverify1.so",
"assets/libzuma.so",
"assets/libzumadata.so",
"dexprotect"
],
"阿里聚安全": [
"aliprotect.dat",
"libdemolish.so",
"libfakejni.so",
"libmobisec.so",
"libsgmain.so",
"libzuma.so",
"libzumadata.so",
"libdemolishdata.so",
"libpreverify1.so",
"libsgsecuritybody.so"
],
"顶像科技": [
"libx3g.so",
"lib/armeabi/libx3g.so"
]
}"""
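The JSON string above maps packer vendors to marker file names. A minimal sketch of how such a table can be matched against an APK's zip entries is shown below; the detect_packer helper and the abbreviated table are illustrative, while the __detect_protect__ method further down additionally rescans every entry and triggers unpacking when a marker is found.

```python
import zipfile

# Abbreviated marker table in the same shape as the JSON string above.
PROTECT_MARKS = {
    "360加固": ["libjiagu.so", "libprotectClass.so"],
    "梆梆安全": ["libSecShell.so", "libsecexe.so"],
}

def detect_packer(apk_path):
    """Return (vendor, zip entry) pairs whose entry name contains a known marker."""
    hits = []
    with zipfile.ZipFile(apk_path) as apk:          # an APK is a plain zip archive
        for entry in apk.namelist():
            for vendor, marks in PROTECT_MARKS.items():
                for mark in marks:
                    if mark in entry:
                        hits.append((vendor, entry))
    return hits

# Usage: detect_packer("app.apk") -> e.g. [("360加固", "lib/armeabi/libjiagu.so")]
```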
def start(self):
# Check that the Java environment is available
if os.system("java -version") != 0:
raise Exception("Please install the Java environment!")
# Check that the Frida environment is available
if os.system("frida --version") != 0:
raise Exception("Please install the Frida environment!")
input_file_path = self.path
if os.path.isdir(input_file_path):
self.__decode_dir__(input_file_path)
else:
if self.__decode_file__(input_file_path) == "error":
raise Exception("Retrieval of this file type is not supported. Select APK file or DEX file.")
return {"comp_list": self.comp_list, "shell_flag": self.shell_flag, "file_queue": self.file_queue, "packagename": self.packagename, "file_identifier": self.file_identifier, "permissions": self.permissions}
return {"comp_list": self.comp_list, "shell_flag": self.shell_flag, "file_queue": self.file_queue, "packagename": self.packagename, "file_identifier": self.file_identifier}
def __detect_protect__(self, file_path):
markNameMap = json.loads(self.protect_flag)
markNameMap = dict(markNameMap)
zip_stream = zipfile.ZipFile(file_path) # 默认模式r,读
flag = ''
for zippath in zip_stream.namelist():
if 'lib' in zippath:
for key, value in markNameMap.items():
for mark in value:
if mark in zippath:
print("detect 【{}】 protector\nspecific code:{}->{}\n".format(key, zippath, mark))
flag += ("detect 【{}】 protector\nspecific code:{}->{}\n".format(key, zippath, mark))
if len(flag) > 0:
self.__android_unpack__()
# so库文件模式找不到就全量匹配
for zippath in zip_stream.namelist():
for key, value in markNameMap.items():
for mark in value:
if mark in zippath:
print("detect 【{}】 protector\nspecific code:{}->{}\n".format(key, zippath, mark))
flag += ("detect 【{}】 protector\nspecific code:{}->{}\n".format(key, zippath, mark))
if len(flag) > 0:
self.__android_unpack__()
print("We can't detect protect")
def __android_unpack__(self):
print('[*] unpacking')
cmd_str = ('%s install %s') % (str(cores.adb_path), str(self.path))
print('[*] Install the APK')
if os.system(cmd_str) == 0:
print("Push Frida Server")
cmd_str = ('%s push %s /data/local/tmp') % (str(cores.adb_path), str(cores.frida32_path))
cmd_str1 = ('%s push %s /data/local/tmp') % (str(cores.adb_path), str(cores.frida64_path))
cmd_str2 = ('%s shell su -c "chmod 777 /data/local/tmp/hexl-server-arm64"') % (str(cores.adb_path))
cmd_str3 = ('%s shell su -c "setenforce 0"') % (str(cores.adb_path))
cmd_str4 = ('%s shell su -c "./data/local/tmp/hexl-server-arm64 &"') % (str(cores.adb_path))
print("[*] Running Frida Server")
if os.system(cmd_str) == 0 and os.system(cmd_str1) == 0 and os.system(cmd_str2) == 0 \
and os.system(cmd_str3) == 0 and os.system(cmd_str4) == 0:
print("[*] Frida Server started")
else:
print("[-] Running failed, please check the error in terminal")
exit()
else:
print("[-] We can't install the APP")
exit()
get_info_command = "%s dump badging %s" % (cores.aapt_apth, self.path)
pip = os.popen(get_info_command)
output = pip.buffer.read().decode('utf-8', 'ignore')
if output == "":
raise Exception("can't get the app info")
match = re.compile("package: name='(\S+)'").match(
output) # 通过正则匹配,获取包名
print(match.group(1))
cmd_str = ('frida-dexdump -U -f %s') % (str(match.group(1)))
if os.system(cmd_str) != 0:
print("An error occurred in the unpack")
exit()
def __decode_file__(self, file_path):
apktool_path = str(cores.apktool_path)
backsmali_path = str(cores.backsmali_path)
base_out_path = str(cores.output_path)
filename = os.path.basename(file_path)
suffix_name = filename.split(".")[-1]
if suffix_name == "apk":
self.__detect_protect__(file_path)
if suffix_name == "apk" or suffix_name == "hpk":
name = filename.split(".")[0]
output_path = os.path.join(base_out_path, name)
self.__decode_apk__(file_path, apktool_path, output_path)
self.__decode_apk__(file_path, output_path)
elif suffix_name == "dex":
f = open(file_path, 'rb')
md5_obj = hashlib.md5()
while True:
r = f.read(1024)
if not r:
break
md5_obj.update(r)
dex_md5 = md5_obj.hexdigest().lower()
self.file_identifier.append(dex_md5)
output_path = os.path.join(base_out_path, dex_md5)
if not os.path.exists(output_path):
os.makedirs(output_path)
self.__decode_dex__(file_path, backsmali_path, output_path)
self.__decode_dex__(file_path, output_path)
else:
return "error"
def __decode_dir__(self, root_dir):
dir_or_files = os.listdir(root_dir)
for dir_or_file in dir_or_files:
dir_or_file_path = os.path.join(root_dir, dir_or_file)
if os.path.isdir(dir_or_file_path):
self.__decode_dir__(dir_or_file_path)
else:
if self.__decode_file__(dir_or_file_path) == "error":
continue
# Decompile the APK
def __decode_apk__(self, file_path, apktool_path, output_path):
cmd_str = ('java -jar "%s" d -f "%s" -o "%s" --only-main-classe') % (str(apktool_path), str(file_path), str(output_path))
def __decode_apk__(self, file_path, output_path):
cmd_str = ('java -jar "%s" d -f "%s" -o "%s" --only-main-classe') % (cores.apktool_file, str(file_path), str(output_path))
logging.debug("[*] cmd {}".format(cmd_str))
if os.system(cmd_str) == 0:
self.__shell_test__(output_path)
self.__scanner_file_by_apktool__(output_path)
else:
print("[-] Decompilation failed, please submit error information at https://github.com/kelvinBen/AppInfoScanner/issues")
logging.error("[x] Decompilation failed, please submit error information at https://github.com/kelvinBen/AppInfoScanner/issues")
raise Exception(file_path + ", Decompilation failed.")
# Decompile the DEX
def __decode_dex__(self, file_path, backsmali_path, output_path):
cmd_str = ('java -jar "%s" d "%s"') % (str(backsmali_path), str(file_path))
def __decode_dex__(self, file_path, output_path):
cmd_str = ('java -jar "%s" d "%s"') % (cores.backsmali_file, str(file_path))
logging.debug("[*] cmd {}".format(cmd_str))
if os.system(cmd_str) == 0:
self.__get_scanner_file__(output_path)
else:
print("[-] Decompilation failed, please submit error information at https://github.com/kelvinBen/AppInfoScanner/issues")
logging.error("[x] Decompilation failed, please submit error information at https://github.com/kelvinBen/AppInfoScanner/issues")
raise Exception(file_path + ", Decompilation failed.")
# Initialize the list of files to be scanned
def __scanner_file_by_apktool__(self, output_path):
file_names = os.listdir(output_path)
for file_name in file_names:
file_path = os.path.join(output_path, file_name)
if not os.path.isdir(file_path):
continue
if "smali" in file_name or "assets" in file_name:
scanner_file_suffixs = ["smali", "js", "xml"]
if cores.resource_flag:
scanner_file_suffixs = ["smali"]
self.__get_scanner_file__(file_path, scanner_file_suffixs)
def __get_scanner_file__(self, scanner_dir, scanner_file_suffixs=["smali"]):
dir_or_files = os.listdir(scanner_dir)
for dir_or_file in dir_or_files:
dir_file_path = os.path.join(scanner_dir, dir_or_file)
if os.path.isdir(dir_file_path):
self.__get_scanner_file__(dir_file_path, scanner_file_suffixs)
else:
if ("." not in dir_or_file) or (len(dir_or_file.split(".")) < 1) or (dir_or_file.split(".")[-1] not in scanner_file_suffixs):
continue
self.file_queue.put(dir_file_path)
for component in config.filter_components:
comp = component.replace(".", "/")
if (comp in dir_file_path):
if (component not in self.comp_list):
self.comp_list.append(component)
def __shell_test__(self, output):
am_path = os.path.join(output, "AndroidManifest.xml")
with open(am_path, "r", encoding='utf-8', errors='ignore') as f:
am_str = f.read()
am_package = re.compile(r'<manifest.*package=\"(.*?)\".*')
apackage = am_package.findall(am_str)
if len(apackage) >= 1:
self.packagename = apackage[0]
self.file_identifier.append(apackage[0])
am_name = re.compile(r'<application.*android:name=\"(.*?)\".*>')
aname = am_name.findall(am_str)
if aname and len(aname) >= 1:
if aname[0] in config.shell_list:
self.shell_flag = True
am_permission = re.compile(r'<uses-permission android:name="(.*)"/>')
ampermissions = am_permission.findall(am_str)
for ampermission in ampermissions:
if ampermission in config.apk_permissions:
self.permissions.append(ampermission)

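The __shell_test__ method above pulls the package name, the application class, and the requested permissions out of the apktool-decoded AndroidManifest.xml with regular expressions, and flags a packer when the application class appears in config.shell_list. A self-contained sketch of that flow follows; SHELL_CLASSES and the sample manifest are placeholders, not the project's real lists.

```python
import re

SHELL_CLASSES = {"com.stub.StubApp", "s.h.e.l.l.S"}   # placeholder wrapper classes

def parse_manifest(manifest_text):
    """Extract package name, shell flag and permissions from a decoded manifest string."""
    package = re.compile(r'<manifest.*package="(.*?)".*').findall(manifest_text)
    app_name = re.compile(r'<application.*android:name="(.*?)".*>').findall(manifest_text)
    permissions = re.compile(r'<uses-permission android:name="(.*)"/>').findall(manifest_text)
    return {
        "package": package[0] if package else None,
        "shell_flag": bool(app_name) and app_name[0] in SHELL_CLASSES,
        "permissions": permissions,
    }

if __name__ == "__main__":
    sample = ('<manifest package="com.example.app">'
              '<uses-permission android:name="android.permission.READ_SMS"/>'
              '<application android:name="com.stub.StubApp"></application></manifest>')
    print(parse_manifest(sample))
```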
@ -3,7 +3,10 @@
# Author: kelvinBen
# Github: https://github.com/kelvinBen/AppInfoScanner
import os
import re
import config
import logging
import threading
from queue import Queue
import libs.core as cores
from libs.task.ios_task import iOSTask
@ -11,128 +14,157 @@ from libs.task.web_task import WebTask
from libs.task.net_task import NetTask
from libs.core.parses import ParsesThreads
from libs.task.android_task import AndroidTask
from libs.task.download_task import DownloadTask
from libs.core.download import DownloadThreads
class BaseTask(object):
thread_list = []
result_dict = {}
app_history_list = []
domain_history_list = []
# Unified initialization entry
def __init__(self, types="Android", inputs="", rules="", sniffer=True, threads=10, package=""):
self.types = types
self.path = inputs
if rules:
config.filter_strs.append(r'.*'+str(rules)+'.*')
self.sniffer = not sniffer
self.threads = threads
self.package = package
self.file_queue = Queue()
# Unified scheduling platform
def start(self):
print("[*] AI is analyzing filtering rules......")
def __init__(self):
if cores.user_add_rules:
config.filter_strs.append(r'.*'+str(cores.user_add_rules)+'.*')
self.file_queue = Queue()
# File download queue
self.download_file_queue = Queue()
# List of downloaded files
self.download_file_list = []
self.thread_list = []
self.app_history_list = []
self.domain_history_list = []
self.result_dict = {}
# Unified start entry
def start(self, types="Android", user_input_path="", package=""):
# If the input path is a directory and the type is not Web, automatically search it for DEX, IPA and APK files
if not(types == "Web") and os.path.isdir(user_input_path):
self.__scanner_specified_file__(user_input_path)
# If the input path is a txt file, load its contents for batch processing
elif user_input_path.endswith("txt"):
with open(user_input_path) as f:
lines = f.readlines()
for line in lines:
# Keep lines that start with http:// or https://, or existing files that end with apk/dex/ipa
if (line.startswith("http://") or line.startswith("https://")) or ((line.endswith("apk") or line.endswith(".dex") or line.endswith("ipa")) and os.path.exists(line)):
self.download_file_queue.put(line)
f.close()
else:
# A single file, or a directory when the type is Web
self.download_file_queue.put(user_input_path)
# If the queue is empty, the directory has to be re-selected
if self.download_file_queue.qsize() < 1:
raise Exception('[x] The specified DEX, IPA and APK files are not found. Please re-enter the directory to be scanned!')
# Unified file download center
self.__download_file_center__(types)
for download_file in self.download_file_list:
file_path = download_file["path"]
types = download_file["type"]
# Control center
self.__control_center__(file_path, types)
# Unified file download center
def __download_file_center__(self, types):
# Avoid wasting resources
if self.download_file_queue.qsize() < cores.threads_num:
threads_num = self.download_file_queue.qsize()
else:
threads_num = cores.threads_num
for threadID in range(1, threads_num):
threadName = "Thread - " + str(int(threadID))
thread = DownloadThreads(threadID, threadName, self.download_file_queue, self.download_file_list, types)
thread.start()
thread.join()
# Fetch the history records
# Control center
def __control_center__(self, file_path, types):
logging.info("[*] Processing {}".format(file_path))
logging.info("[*] AI is analyzing filtering rules......")
# Handle the history records
self.__history_handle__()
print("[*] The filtering rules obtained by AI are as follows: %s" % (set(config.filter_no)))
logging.info("[*] The filtering rules obtained by AI are as follows: {}".format(set(config.filter_no)))
# Task control center
task_info = self.__tast_control__()
task_info = self.__tast_control__(file_path, types)
if len(task_info) < 1:
return
# File queue
file_queue = task_info["file_queue"]
# Whether the app is packed (shelled)
shell_flag = task_info["shell_flag"]
# Component list (Android only)
comp_list = task_info["comp_list"]
# Package name (Android only)
packagename = task_info["packagename"]
# File identifier
file_identifier = task_info["file_identifier"]
permissions = task_info["permissions"]
if shell_flag:
print('[-] \033[3;31m Error: This application has shell, the retrieval results may not be accurate, Please remove the shell and try again!')
logging.error('[x] This application has shell, the retrieval results may not be accurate, Please remove the shell and try again!')
return
# Thread control center
print("[*] ========= Searching for strings that match the rules ===============")
logging.info("[*] ========= Searching for strings that match the rules ===============")
self.__threads_control__(file_queue)
# Wait for the threads to finish
for thread in self.thread_list:
thread.join()
# Result output center
self.__print_control__(packagename, comp_list, file_identifier, permissions)
self.__print_control__(packagename, comp_list, file_identifier)
def __tast_control__(self):
# Task control center
def __tast_control__(self, file_path, types):
task_info = {}
# Automatically corrected according to the file suffix
cache_info = DownloadTask().start(self.path, self.types)
cacar_path = cache_info["path"]
types = cache_info["type"]
if (not os.path.exists(cacar_path) and cores.download_flag):
print("[-] File download failed! Please download the file manually and try again.")
# If a file downloaded over the network does not exist, return to the task control center directly
if (not os.path.exists(file_path) and cores.download_flag):
logging.error("[x] {} download failed! Please download the file manually and try again.".format(file_path))
return task_info
# Android handling logic
if types == "Android":
task_info = AndroidTask(cacar_path, self.package).start()
task_info = AndroidTask(file_path, self.package).start()
# iOS handling logic
elif types == "iOS":
task_info = iOSTask(cacar_path).start()
task_info = iOSTask(file_path).start()
# Web handling logic
else:
task_info = WebTask(cacar_path).start()
task_info = WebTask(file_path).start()
return task_info
def __threads_control__(self, file_queue):
# Thread control center
def __threads_control__(self, file_queue):
for threadID in range(1, self.threads):
for threadID in range(1, cores.threads_num):
name = "Thread - " + str(int(threadID))
thread = ParsesThreads(threadID, name, file_queue, self.result_dict, self.types)
thread.start()
self.thread_list.append(thread)
def __print_control__(self, packagename, comp_list, file_identifier, permissions):
txt_result_path = cores.txt_result_path
xls_result_path = cores.xls_result_path
all_flag = cores.all_flag
if self.sniffer:
print("[*] ========= Sniffing the URL address of the search ===============")
NetTask(self.result_dict, self.app_history_list, self.domain_history_list, file_identifier, self.threads).start()
if packagename:
print("[*] ========= The package name of this APP is: ===============")
print(packagename)
# Information output center
def __print_control__(self, packagename, comp_list, file_identifier):
if cores.net_sniffer_flag:
logging.info("[*] ========= Sniffing the URL address of the search ===============")
NetTask(self.result_dict, self.app_history_list, self.domain_history_list, file_identifier).start()
if packagename:
logging.info("[*] ========= The package name of this APP is: ===============")
logging.info(packagename)
if len(comp_list) != 0:
print("[*] ========= Component information is as follows: ===============") logging.info("[*] ========= Component information is as follows :===============")
for json in comp_list: for json in comp_list:
print(json) logging.info(json)
if len(permissions) != 0: if cores.all_flag:
print(
"[*] ========= Sensitive permission information is as follows: ===============")
for permission in permissions:
print(permission)
if all_flag:
value_list = [] value_list = []
with open(txt_result_path, "a+", encoding='utf-8', errors='ignore') as f: with open(txt_result_path,"a+",encoding='utf-8',errors='ignore') as f:
for key, value in self.result_dict.items(): for key,value in self.result_dict.items():
f.write(key+"\r") f.write(key+"\r")
for result in value: for result in value:
if result in value_list: if result in value_list:
@ -140,37 +172,46 @@ class BaseTask(object):
value_list.append(result)
f.write("\t"+result+"\r")
f.close()
print("[*] For more information about the search, see TXT file result: %s" % (txt_result_path))
logging.info("[>] For more information about the search, see TXT file result: {}".format(cores.txt_result_path))
if self.sniffer:
if cores.net_sniffer_flag:
print("[*] For more information about the search, see XLSX file result: %s" % (xls_result_path))
logging.info("[>] For more information about the search, see XLS file result: {}".format(cores.xls_result_path))
# Load the history records
def __history_handle__(self): def __history_handle__(self):
domain_history_path = cores.domain_history_path domain_history_path = cores.domain_history_path
app_history_path = cores.app_history_path app_history_path = cores.app_history_path
if os.path.exists(domain_history_path): if os.path.exists(domain_history_path):
domain_counts = {} domain_counts = {}
app_size = 0 app_size = 0
with open(app_history_path, "r", encoding='utf-8', errors='ignore') as f: with open(app_history_path,"r",encoding='utf-8',errors='ignore') as f:
lines = f.readlines() lines = f.readlines()
app_size = len(lines) app_size = len(lines)
for line in lines: for line in lines:
self.app_history_list.append( self.app_history_list.append(line.replace("\r","").replace("\n",""))
line.replace("\r", "").replace("\n", ""))
f.close() f.close()
with open(domain_history_path, "r", encoding='utf-8', errors='ignore') as f: with open(domain_history_path,"r",encoding='utf-8',errors='ignore') as f:
lines = f.readlines() lines = f.readlines()
cout = 3 cout = 3
if (app_size > 3) and (app_size % 3 == 0): if (app_size>3) and (app_size%3==0):
cout = cout + 1 cout = cout + 1
for line in lines: for line in lines:
domain = line.replace("\r", "").replace("\n", "") domain = line.replace("\r","").replace("\n","")
self.domain_history_list.append(domain) self.domain_history_list.append(domain)
domain_count = lines.count(line) domain_count = lines.count(line)
if domain_count >= cout: if domain_count >= cout:
config.filter_no.append(".*" + domain) config.filter_no.append(".*" + domain)
f.close() f.close()
# Scan for files with the specified suffixes
def __scanner_specified_file__(self, base_dir, file_suffix=['dex','ipa','apk']):
files = os.listdir(base_dir)
for file in files:
dir_or_file_path = os.path.join(base_dir,file)
if os.path.isdir(dir_or_file_path):
self.__scanner_specified_file__(dir_or_file_path,file_suffix)
else:
if dir_or_file_path.split(".")[-1] in file_suffix:
self.download_file_queue.put(dir_or_file_path)

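The download center shown above caps the number of DownloadThreads at the smaller of the queue size and cores.threads_num. A minimal sketch of that sizing pattern follows, with a hypothetical worker; the sketch starts all workers before joining so they run concurrently.

```python
import threading
from queue import Queue, Empty

def download_worker(task_queue, results):
    """Hypothetical worker: drain the queue and record what would be downloaded."""
    while True:
        try:
            url = task_queue.get_nowait()
        except Empty:
            return
        # a real worker would stream the file to disk here (e.g. with requests)
        results.append({"path": url, "type": "Android" if url.endswith(".apk") else "WEB"})
        task_queue.task_done()

def run_download_center(urls, max_threads=10):
    queue, results, workers = Queue(), [], []
    for url in urls:
        queue.put(url)
    # never start more workers than there are queued tasks
    for _ in range(min(queue.qsize(), max_threads)):
        t = threading.Thread(target=download_worker, args=(queue, results))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()
    return results

# run_download_center(["http://example.com/a.apk", "http://example.com/b.apk"])
```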
@ -7,42 +7,72 @@ import re
import time
import config
import hashlib
import logging
from queue import Queue
import libs.core as cores
from libs.core.download import DownloadThreads
class DownloadTask(object):
def __init__(self):
self.download_file_queue = Queue()
self.thread_list = []
def start(self, path, types):
self.__local_or_remote__(path, types)
for threadID in range(1, cores.threads_num):
name = "Thread - " + str(int(threadID))
thread = DownloadThreads(threadID, name, self.download_file_queue)
thread.start()
thread.join()
# Decide whether the file is loaded locally or remotely
def __local_or_remote__(self, path, types):
# Determine the type from the file suffix
self.__update_type__(path)
# Handle local files
if not(path.startswith("http://") or path.startswith("https://")):
if not os.path.isdir(path):  # not a directory
return {"path": path, "type": types}
else:  # directory handling
return {"path": path, "type": types}
else:
self.__net_header__(path, types)
# self.download_file_queue.put(path)
# Handle network requests
def __net_header__(self, path, types):
create_time = time.strftime("%Y%m%d%H%M%S", time.localtime())
if path.endswith("apk") or types == "Android":
types = "Android"
file_name = create_time + ".apk"
elif path.endswith("ipa") or types == "iOS":
types = "iOS"
file_name = create_time + ".ipa"
else:
types = "WEB"
file_name = create_time + ".html"
logging.info("[*] Detected that the task is not local, preparing to download file......")
cache_path = os.path.join(cores.download_dir, file_name)
self.download_file_queue.put({"path": path, "cache_path": cache_path, "types": types})
# thread = DownloadThreads(path,file_name,cache_path,types)
# thread.start()
# thread.join()
return {"path": cache_path, "type": types}
def __update_type__(self, path, types, file_name=None):
create_time = time.strftime("%Y%m%d%H%M%S", time.localtime())
if path.endswith("apk") or types == "Android":
types = "Android"
if not file_name:
file_name = create_time + ".apk"
elif path.endswith("ipa") or types == "iOS":
types = "iOS"
if not file_name:
file_name = create_time + ".ipa"
else:
types = "WEB"
if not file_name:
file_name = create_time + ".html"
return types, file_name
def start(self, path, types):
create_time = time.strftime("%Y%m%d%H%M%S", time.localtime())
if path.endswith("apk"):
types = "Android"
file_name = create_time + ".apk"
elif path.endswith("ipa"):
types = "iOS"
file_name = create_time + ".ipa"
else:
if types == "Android":
types = "Android"
file_name = create_time + ".apk"
elif types == "iOS":
types = "iOS"
file_name = create_time + ".ipa"
else:
types = "WEB"
file_name = create_time + ".html"
if not(path.startswith("http://") or path.startswith("https://")):
if not os.path.isdir(path):  # not a directory
return {"path": path, "type": types}
else:  # directory handling
return {"path": path, "type": types}
else:
print("[*] Detected that the task is not local, preparing to download file......")
cache_path = os.path.join(cores.download_path, file_name)
thread = DownloadThreads(path, file_name, cache_path, types)
thread.start()
thread.join()
print()
return {"path": cache_path, "type": types}

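Both __net_header__ and __update_type__ above derive the task type and a timestamped cache file name from the input's suffix. A compact sketch of that mapping follows; the helper name is illustrative.

```python
import time

def infer_type_and_cache_name(path, default_type="WEB"):
    """Mirror the suffix checks above: map a path/URL to a (type, cache file name) pair."""
    stamp = time.strftime("%Y%m%d%H%M%S", time.localtime())
    if path.endswith("apk") or default_type == "Android":
        return "Android", stamp + ".apk"
    if path.endswith("ipa") or default_type == "iOS":
        return "iOS", stamp + ".ipa"
    return "WEB", stamp + ".html"

# infer_type_and_cache_name("https://example.com/app.apk") -> ("Android", "<timestamp>.apk")
```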
@ -3,142 +3,95 @@
# Author: kelvinBen # Author: kelvinBen
# Github: https://github.com/kelvinBen/AppInfoScanner # Github: https://github.com/kelvinBen/AppInfoScanner
import os import os
import re
import shutil
import zipfile import zipfile
import binascii import binascii
import platform import platform
import libs.core as cores import libs.core as cores
from queue import Queue from queue import Queue
class iOSTask(object): class iOSTask(object):
elf_file_name = "" elf_file_name = ""
def __init__(self, path):
self.path = path
self.file_queue = Queue()
self.shell_flag = False
self.file_identifier = []
self.permissions = []
def start(self):
file_path = self.path
if file_path.split(".")[-1] == 'ipa':
self.__decode_ipa__(cores.output_path)
self.__scanner_file_by_ipa__(cores.output_path)
elif self.__get_file_header__(file_path):
self.file_queue.put(file_path)
else:
raise Exception("Retrieval of this file type is not supported. Select IPA file or Mach-o file.")
return {"shell_flag": self.shell_flag, "file_queue": self.file_queue, "comp_list": [], "packagename": None, "file_identifier": self.file_identifier, "permissions": self.permissions}
return {"shell_flag": self.shell_flag, "file_queue": self.file_queue, "comp_list": [], "packagename": None, "file_identifier": self.file_identifier}
def __get_file_header__(self, file_path):
hex_hand = 0x0
crypt_load_command_hex = "2C000000"
macho_name = os.path.split(file_path)[-1]
self.file_identifier.append(macho_name)
with open(file_path, "rb") as macho_file:
macho_file.seek(hex_hand, 0)
macho_file.seek(0x0, 0)
magic = binascii.hexlify(macho_file.read(4)).decode().upper()
macho_magics = ["CFFAEDFE", "CEFAEDFE", "BEBAFECA", "CAFEBABE"]
if magic in macho_magics:
self.__shell_test__(macho_file, hex_hand)
hex_str = binascii.hexlify(macho_file.read()).decode().upper()
if crypt_load_command_hex in hex_str:
macho_file.seek(int(hex_str.index("2C000000")/2)+20, 0)
cryptid = binascii.hexlify(macho_file.read(4)).decode()
if cryptid == "01000000":
self.shell_flag = True
macho_file.close()
return True
macho_file.close()
return False
def __shell_test__(self, macho_file, hex_hand):
while True:
magic = binascii.hexlify(macho_file.read(4)).decode().upper()
if magic == "2C000000":
macho_file.seek(hex_hand, 0)
encryption_info_command = binascii.hexlify(
macho_file.read(24)).decode()
cryptid = encryption_info_command[-8:len(
encryption_info_command)]
if cryptid == "01000000":
self.shell_flag = True
break
hex_hand = hex_hand + 4
def __scanner_file_by_ipa__(self, output):
scanner_file_suffix = ["plist", "js", "xml", "html"]
scanner_dir = os.path.join(output, "Payload")
self.__get_scanner_file__(scanner_dir, scanner_file_suffix)
def __get_scanner_file__(self, scanner_dir, file_suffix):
dir_or_files = os.listdir(scanner_dir) dir_or_files = os.listdir(scanner_dir)
for dir_file in dir_or_files: for dir_file in dir_or_files:
dir_file_path = os.path.join(scanner_dir, dir_file) dir_file_path = os.path.join(scanner_dir,dir_file)
if os.path.isdir(dir_file_path): if os.path.isdir(dir_file_path):
if dir_file.endswith(".app"): if dir_file.endswith(".app"):
self.elf_file_name = dir_file.replace(".app", "") self.elf_file_name = dir_file.replace(".app","")
self.__get_scanner_file__(dir_file_path, file_suffix) self.__get_scanner_file__(dir_file_path,file_suffix)
else: else:
if self.elf_file_name == dir_file: if self.elf_file_name == dir_file:
self.__get_file_header__(dir_file_path) self.__get_file_header__(dir_file_path)
self.file_queue.put(dir_file_path) self.file_queue.put(dir_file_path)
continue continue
if cores.resource_flag: if cores.resource_flag:
dir_file_suffix = dir_file.split(".") dir_file_suffix = dir_file.split(".")
if len(dir_file_suffix) > 1: if len(dir_file_suffix) > 1:
if dir_file_suffix[-1] in file_suffix: if dir_file_suffix[-1] in file_suffix:
self.__get_file_header__(dir_file_path) self.__get_file_header__(dir_file_path)
self.file_queue.put(dir_file_path) self.file_queue.put(dir_file_path)
def __decode_ipa__(self, output_path):
with zipfile.ZipFile(self.path, "r") as zip_files:
zip_file_names = zip_files.namelist()
zip_files.extract(zip_file_names[0], output_path)
try:
new_zip_file = zip_file_names[0].encode('cp437').decode('utf-8')
except UnicodeEncodeError:
new_zip_file = zip_file_names[0].encode('utf-8').decode('utf-8')
old_zip_dir = self.__get_parse_dir__(output_path, zip_file_names[0])
new_zip_dir = self.__get_parse_dir__(output_path, new_zip_file)
os.rename(old_zip_dir, new_zip_dir)
for zip_file in zip_file_names:
old_ext_path = zip_files.extract(zip_file, output_path)
if not "Payload" in old_ext_path:
continue
start = str(old_ext_path).index("Payload")
dir_path = old_ext_path[start:len(old_ext_path)]
old_ext_path = os.path.join(output_path, dir_path)
try:
new_zip_file = zip_file.encode('cp437').decode('utf-8')
except UnicodeEncodeError:
new_zip_file = zip_file.encode('utf-8').decode('utf-8')
new_ext_path = os.path.join(output_path, new_zip_file)
if platform.system() == "Windows":
new_ext_path = new_ext_path.replace("/", "\\")
if not os.path.exists(new_ext_path):
dir_path = os.path.dirname(new_ext_path)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
shutil.move(old_ext_path, new_ext_path)
# When the old directory differs from the new one, delete the old directory
if not (old_ext_path == new_ext_path) and os.path.exists(old_ext_path) and (".app" in old_ext_path):
try:
# Handle permission problems on macOS
os.remove(old_ext_path)
except Exception:
shutil.rmtree(old_ext_path)
def __decode_ipa__(self, output_path):
scanner_file_suffix = ["plist", "js", "xml", "html"]
scanner_dir = os.path.join(output_path, "Payload")
with zipfile.ZipFile(self.path, "r") as zip_files:
zip_file_names = zip_files.namelist()
for zip_file_name in zip_file_names:
try:
if platform.system() == "Windows":
new_file_name = zip_file_name.encode('cp437').decode('GBK')
else:
new_file_name = zip_file_name.encode('cp437').decode('utf-8')
except UnicodeEncodeError:
new_file_name = zip_file_name.encode('utf-8').decode('utf-8')
new_ext_file_path = os.path.join(output_path, new_file_name)
ext_file_path = zip_files.extract(zip_file_name, output_path)
os.rename(ext_file_path, new_ext_file_path)
self.__get_scanner_file__(scanner_dir, scanner_file_suffix)
def __get_parse_dir__(self, output_path, file_path): def __get_parse_dir__(self,output_path,file_path):
start = file_path.index("Payload/") start = file_path.index("Payload/")
end = file_path.index(".app") end = file_path.index(".app")
root_dir = file_path[start:end] root_dir = file_path[start:end]
if platform.system() == "Windows": if platform.system() == "Windows":
root_dir = root_dir.replace("/", "\\") root_dir = root_dir.replace("/","\\")
old_root_dir = os.path.join(output_path, root_dir+".app") old_root_dir = os.path.join(output_path,root_dir+".app")
return old_root_dir return old_root_dir

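The new __get_file_header__ above checks the Mach-O magic and then searches the hex dump for a 0x2C load command (LC_ENCRYPTION_INFO_64) to read the cryptid flag. Below is a standalone sketch of the same heuristic; is_encrypted_macho is an illustrative name, a naive byte search can false-positive, and fat binaries would need per-architecture handling.

```python
import binascii

MACHO_MAGICS = {"CFFAEDFE", "CEFAEDFE", "BEBAFECA", "CAFEBABE"}  # thin + fat Mach-O magics

def is_encrypted_macho(file_path):
    """Heuristic cryptid check in the spirit of the code above."""
    with open(file_path, "rb") as f:
        data = f.read()
    magic = binascii.hexlify(data[:4]).decode().upper()
    if magic not in MACHO_MAGICS:
        return False
    marker = bytes.fromhex("2C000000")          # LC_ENCRYPTION_INFO_64 command id, little-endian
    offset = data.find(marker)
    if offset == -1:
        return False
    # encryption_info_command_64 layout: cmd, cmdsize, cryptoff, cryptsize, cryptid, pad
    cryptid = data[offset + 16:offset + 20]
    return cryptid == b"\x01\x00\x00\x00"

# is_encrypted_macho("Payload/Example.app/Example") -> True when the binary is still encrypted
```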
@ -2,109 +2,101 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# Author: kelvinBen # Author: kelvinBen
# Github: https://github.com/kelvinBen/AppInfoScanner # Github: https://github.com/kelvinBen/AppInfoScanner
import openpyxl
import config import config
from queue import Queue from queue import Queue
import libs.core as cores import libs.core as cores
from openpyxl import Workbook
from libs.core.net import NetThreads from libs.core.net import NetThreads
class NetTask(object): class NetTask(object):
value_list = []
domain_list = []
def __init__(self, result_dict, app_history_list, domain_history_list, file_identifier, threads):
def __init__(self, result_dict, app_history_list, domain_history_list, file_identifier):
self.result_dict = result_dict
self.app_history_list = app_history_list
self.domain_history_list = domain_history_list
self.file_identifier = file_identifier
self.domain_queue = Queue()
self.threads = int(threads)
self.thread_list = []
self.domain_history_list = domain_history_list
def start(self):
xls_result_path = cores.xls_result_path
workbook = openpyxl.Workbook()
workbook = Workbook()
worksheet = self.__creating_excel_header__(workbook)
self.__write_result_to_txt__()
self.__start_threads__(worksheet)
for thread in self.thread_list:
thread.join()
workbook.save(xls_result_path)
def __creating_excel_header__(self, workbook):
worksheet = workbook.create_sheet("Result", 0)
worksheet.cell(row=1, column=1, value="Number")
worksheet.cell(row=1, column=2, value="IP/URL")
worksheet.cell(row=1, column=3, value="Domain")
worksheet.cell(row=1, column=4, value="Status")
worksheet.cell(row=1, column=5, value="IP")
worksheet.cell(row=1, column=6, value="Server")
worksheet.cell(row=1, column=7, value="Title")
worksheet.cell(row=1, column=8, value="CDN")
worksheet.cell(row=1, column=9, value="Finger")
excel_headers = ["Number", "IP/URL", "Domain", "Status", "IP", "Server", "Title", "CDN", "Finger"]
for head_cell in excel_headers:
column = excel_headers.index(head_cell) + 1
worksheet.cell(row=1, column=column).value = head_cell
return worksheet
def __write_result_to_txt__(self): def __write_result_to_txt__(self):
txt_result_path = cores.txt_result_path
append_file_flag = True append_file_flag = True
for key, value in self.result_dict.items(): for key,value in self.result_dict.items():
for result in value: for result in value:
if result in self.value_list: if result in self.value_list:
continue continue
self.value_list.append(result) self.value_list.append(result)
if (("http://" in result) or ("https://" in result)) and ("." in result): if (("http://" in result) or ("https://" in result)) and ("." in result):
domain = result.replace("https://","").replace("http://","")
if "{" in result or "}" in result or "[" in result or "]" in result or "\\" in result or "!" in result or "," in result: if "{" in result or "}" in result or "[" in result or "]" in result or "\\" in result or "!" in result or "," in result:
continue continue
domain = result.replace(
"https://", "").replace("http://", "")
if "/" in domain: if "/" in domain:
domain = domain[:domain.index("/")] domain = domain[:domain.index("/")]
if "|" in result: if "|" in result:
result = result[:result.index("|")] result = result[:result.index("|")]
# With the protocol prefix included, the shortest domain currently in circulation is 11 characters long
if len(result) <= 10: if len(result) <= 10:
continue continue
url_suffix = result[result.rindex(".")+1:].lower() url_suffix = result[result.rindex(".")+1:].lower()
if not(cores.resource_flag and url_suffix in config.sniffer_filter): if not(cores.resource_flag and url_suffix in config.sniffer_filter):
self.domain_queue.put( self.domain_queue.put({"domain":domain,"url_ip":result})
{"domain": domain, "url_ip": result})
for identifier in self.file_identifier: for identifier in self.file_identifier:
if identifier in self.app_history_list: if identifier in self.app_history_list:
if not(domain in self.domain_history_list): if not(domain in self.domain_history_list):
self.domain_list.append(domain) self.domain_list.append(domain)
self.__write_content_in_file__( self.__write_content_in_file__(cores.domain_history_path,domain)
cores.domain_history_path, domain)
continue continue
if not(domain in self.domain_list): if not(domain in self.domain_list):
self.domain_list.append(domain) self.domain_list.append(domain)
self.__write_content_in_file__( self.__write_content_in_file__(cores.domain_history_path,domain)
cores.domain_history_path, domain)
if append_file_flag: if append_file_flag:
self.__write_content_in_file__( self.__write_content_in_file__(cores.app_history_path,identifier)
cores.app_history_path, identifier)
append_file_flag = False append_file_flag = False
def __start_threads__(self, worksheet): def __start_threads__(self,worksheet):
for threadID in range(0, self.threads): for threadID in range(0, cores.threads_num):
name = "Thread - " + str(threadID) name = "Thread - " + str(threadID)
thread = NetThreads(threadID, name, self.domain_queue, worksheet) thread = NetThreads(threadID,name,self.domain_queue,worksheet)
thread.start() thread.start()
self.thread_list.append(thread) self.thread_list.append(thread)
def __write_content_in_file__(self, file_path, content): def __write_content_in_file__(self,file_path,content):
with open(file_path, "a+", encoding='utf-8', errors='ignore') as f: with open(file_path,"a+",encoding='utf-8',errors='ignore') as f:
f.write(content+"\r") f.write(content+"\r")
f.close() f.close()

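__creating_excel_header__ above now writes the header row by iterating a list of column names instead of nine separate cell() calls. A minimal openpyxl sketch of that pattern follows; the file name is illustrative.

```python
from openpyxl import Workbook

EXCEL_HEADERS = ["Number", "IP/URL", "Domain", "Status", "IP", "Server", "Title", "CDN", "Finger"]

def create_result_sheet(path):
    workbook = Workbook()
    worksheet = workbook.create_sheet("Result", 0)          # insert as the first sheet
    for column, title in enumerate(EXCEL_HEADERS, start=1):
        worksheet.cell(row=1, column=column, value=title)   # header row
    workbook.save(path)
    return worksheet

# create_result_sheet("result_demo.xlsx")
```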
@ -7,9 +7,8 @@ import config
import hashlib import hashlib
from queue import Queue from queue import Queue
class WebTask(object): class WebTask(object):
thread_list = [] thread_list =[]
value_list = [] value_list = []
result_dict = {} result_dict = {}
@ -17,32 +16,31 @@ class WebTask(object):
self.path = path self.path = path
self.file_queue = Queue() self.file_queue = Queue()
self.file_identifier = [] self.file_identifier = []
self.permissions = []
def start(self): def start(self):
if len(config.web_file_suffix) <= 0: if len(config.web_file_suffix) <=0:
scanner_file_suffix = ["html", "js", "html", "xml"] scanner_file_suffix = ["html","js","html","xml"]
scanner_file_suffix = config.web_file_suffix scanner_file_suffix = config.web_file_suffix
if os.path.isdir(self.path): if os.path.isdir(self.path):
self.__get_scanner_file__(self.path, scanner_file_suffix) self.__get_scanner_file__(self.path,scanner_file_suffix)
else: else:
if not (self.path.split(".")[-1] in scanner_file_suffix): if not (self.path.split(".")[-1] in scanner_file_suffix):
err_info = ("Retrieval of this file type is not supported. Select a file or directory with a suffix of %s" % ",".join(scanner_file_suffix)) err_info = ("Retrieval of this file type is not supported. Select a file or directory with a suffix of %s" % ",".join(scanner_file_suffix))
raise Exception(err_info) raise Exception(err_info)
self.file_queue.put(self.path) self.file_queue.put(self.path)
return {"comp_list": [], "shell_flag": False, "file_queue": self.file_queue, "packagename": None, "file_identifier": self.file_identifier, "permissions": self.permissions} return {"comp_list":[],"shell_flag":False,"file_queue":self.file_queue,"packagename":None,"file_identifier":self.file_identifier}
def __get_scanner_file__(self, scanner_dir, file_suffix): def __get_scanner_file__(self,scanner_dir,file_suffix):
dir_or_files = os.listdir(scanner_dir) dir_or_files = os.listdir(scanner_dir)
for dir_file in dir_or_files: for dir_file in dir_or_files:
dir_file_path = os.path.join(scanner_dir, dir_file) dir_file_path = os.path.join(scanner_dir,dir_file)
if os.path.isdir(dir_file_path): if os.path.isdir(dir_file_path):
self.__get_scanner_file__(dir_file_path, file_suffix) self.__get_scanner_file__(dir_file_path,file_suffix)
else: else:
if len(dir_file.split(".")) > 1: if len(dir_file.split("."))>1:
if dir_file.split(".")[-1] in file_suffix: if dir_file.split(".")[-1] in file_suffix:
with open(dir_file_path,'rb') as f: with open(dir_file_path,'rb') as f:
dex_md5 = str(hashlib.md5().update(f.read()).hexdigest()).upper() dex_md5 = str(hashlib.md5().update(f.read()).hexdigest()).upper()
self.file_identifier.append(dex_md5) self.file_identifier.append(dex_md5)
self.file_queue.put(dir_file_path) self.file_queue.put(dir_file_path)

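One caveat in the WebTask hunk above: `hashlib.md5().update(f.read()).hexdigest()` fails at runtime because update() returns None, so the calls cannot be chained. A small sketch of the usual pattern follows; file_md5_upper is an illustrative helper name.

```python
import hashlib

def file_md5_upper(path):
    """Compute an upper-case MD5 of a file; update() must be called on the hash object first."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):   # read in chunks to handle large files
            md5.update(chunk)
    return md5.hexdigest().upper()
```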
@ -1,5 +1,5 @@
requests
click
openpyxl
frida-tools==11.0.0
frida-dexdump
xlwt
pillow
openpyxl

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

@ -1,14 +1,15 @@
### V1.0.9
- Update apktool to the latest version
- Streamline parts of the workflow
- Fix the row-count limit when exporting the Excel file
- Fix stuttering while the script is running
- Fix insufficient permissions on Payload files under Mac
### V1.0.9
- Add batch processing of IPA files
- Improve iOS shell (packer) detection speed
- Improve IPA extraction efficiency
- Improve log output
- Fix overly long content failing to be written to the Excel file
### V1.0.8
- Add AK and SK detection
- Add an entry for submitting detection rules
- Add a .gitignore file
- Improve the txt result output
- Fix directories containing spaces failing to parse
- Fix WEB page and directory scanning
