**AxBench** is a benchmark suite with the necessary annotations for approximate computing. We develop **AxBench** in C++, aiming to provide a set of representative applications from various domains for exploring different aspects of approximate computing. **AxBench** is developed in the Alternative Computing Technologies (ACT) Laboratory at the Georgia Institute of Technology.
*** === Papers === ***
We actively work on **AxBench** to add more applications from different domains (e.g., Computer Vision, Data Analytics, Multimedia, Web Search, Finance). We are also working on adding features that enable researchers to study different aspects of approximate computing. As a courtesy to the developers, we ask that you please cite our papers from MICRO'12 and ISCA'14 describing the suite (BibTeX entries follow the list):
1. H. Esmaeilzadeh, A. Sampson, L. Ceze, D. Burger,
"Neural acceleration for general-purpose approximate programs", MICRO 2012.
2. R. St. Amant, A. Yazdanbakhsh, J. Park, B. Thwaites, H. Esmaeilzadeh, A. Hassibi, L. Ceze, D. Burger,
"General-Purpose Code Acceleration with Limited-Precision Analog Computation", ISCA 2014.
*** === Applications === ***
*** === Build and Run AxBench === ***
1) After downloading **AxBench**, go to the **parrot.c/src** directory and run **bash buildlib.sh**. This creates a static library that is later used to apply the Parrot transformation to the applications.
2) Next, modify **config.mk** in the **applications** folder with the locations of **Parrot** and the **FANN** library.
3) You are now set to use **AxBench**. Simply execute the **run.sh** script to make or run each of the applications; a command sketch follows this list.
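Putting the steps together, a typical session might look like the following. The application name **sobel** is a placeholder here, since the application list is not reproduced in this excerpt:

```bash
# 1) Build the static library for the Parrot transformation
cd parrot.c/src
bash buildlib.sh
cd ../..

# 2) Edit applications/config.mk to point to Parrot and the FANN library
#    (one-time setup with your editor of choice)

# 3) Build and run an application through run.sh
bash run.sh make sobel
bash run.sh run sobel
```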
**AxBench** can be run in precise or approximate modes. Currently, we support two modes, namely *NPU_OBSERVATION* and *NPU_FANN*. The observation mode simply runs the applications in the precise mode and generates the precise outputs in the data directory. The FANN mode runs the applications on a neural network, generates the approximate output, and reports the output error. The NN configuration for each application is placed in the *cfg* directory inside the corresponding application directory.
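This README does not spell out how a mode is selected. A plausible reading, given the per-application files named later (**Makefile**/**run_observation.sh** and **Makefile_nn**/**run_NN.sh**), is that each mode has its own build file and run script; the sketch below rests on that assumption, and the **applications/sobel** path is a placeholder:

```bash
cd applications/sobel   # placeholder application directory

# NPU_OBSERVATION: precise execution, precise outputs written to the data directory
bash run_observation.sh

# NPU_FANN: execution on the trained NN (configuration read from cfg/),
# approximate outputs plus an output-error report
bash run_NN.sh
```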
*** === Compilation Parameters === ***
Several parameters must be specified by the user at compilation time. A brief explanation of each follows; a sketch of how they might appear in a configuration file comes after the list.
1) **Learning rate:** The learning rate for the RPROP training algorithm.
2) **Epoch number:** The number of training epochs.
3) **Sampling rate:** The percentage of the data used for training and testing.
4) **Test data fraction:** The percentage of the sampled data used for testing the obtained neural network.
5) **Maximum number of layers:** The maximum number of layers in the neural network.
6) **Maximum number of neurons per layer:** The maximum number of neurons in each hidden layer.
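As an illustration, these parameters might sit in a make-style configuration next to the paths from the build steps above. The variable names and values below are hypothetical, not the actual keys AxBench uses:

```make
# Hypothetical names and values, for illustration only
LEARNING_RATE         := 0.1   # RPROP learning rate
EPOCH_NUMBER          := 100   # number of training epochs
SAMPLING_RATE         := 0.5   # percentage of data used for training/testing
TEST_DATA_FRACTION    := 0.3   # percentage of sampled data held out for testing
MAX_LAYERS            := 3     # maximum number of NN layers
MAX_NEURONS_PER_LAYER := 8     # maximum neurons per hidden layer
```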
*** === Adding new benchmarks === ***
You can easily add new benchmarks to **AxBench** by following the steps below; an illustrative annotation sketch comes after the list.
1) Run **bash run.sh setup <application name>**.
2) Put the source files into the **src** directory and annotate the region of interest with the **Parrot** semantics.
3) Put the train and test datasets into their corresponding folders (train.data and test.data).
4) Create **Makefile**, **Makefile_nn**, **run_observation.sh**, and **run_NN.sh**. The existing application directories can serve as templates for these files.
5) Run **bash run.sh make <application name>** to build the application.
6) Run **bash run.sh run <application name>** to apply the Parrot transformation and replace the region of interest with a neural network.
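For step 2, the region of interest is marked with **Parrot** annotations. The exact pragma syntax is defined by the Parrot transformation shipped in **parrot.c**; the argument format in this C++ sketch is an assumption made for illustration, and **mykernel** and the value ranges are placeholders:

```cpp
// Illustrative only: the pragma argument format below is assumed,
// not quoted from the Parrot documentation.
void kernel(const float in[3], float& out)
{
#pragma parrot(input, "mykernel", [3]<0.0; 1.0>)   // 3 inputs in [0, 1]
    // Original precise computation: the candidate region that the
    // Parrot transformation may replace with a neural network.
    out = (in[0] + in[1] + in[2]) / 3.0f;
#pragma parrot(output, "mykernel", <0.0; 1.0>)     // 1 output in [0, 1]
}
```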
*** === Software License === ***