ANALYSIS OF COMBINING WORD EMBEDDING, RESNET, AND GRU IN THE PIX2CODE MODEL

Fadly Triansyah Rahman, Farhan Rahmat Abdillah, Transmissia Semiawan, Nurjannah Syakrani

Abstract


The pix2code model is a machine learning model that automates the implementation of a Graphical User Interface (GUI) mockup into code. First developed by Tony Beltramelli, the original model achieved an accuracy of 77%. Further development of pix2code is considered important because higher accuracy means the code the model generates is more reliable. This study develops the pix2code model using a combination of word embedding, Residual Network (ResNet), and Gated Recurrent Unit (GRU) methods. The data come from Tony Beltramelli's research: 3,500 samples, each consisting of a mockup image and its corresponding context. The results of this study indicate that the combination of word embedding, ResNet, and GRU succeeds in increasing the accuracy of the pix2code model from 77% to 80%. In addition, the variance of the distribution of the model's accuracy decreased from 0.078782 to 0.046656, showing that, across variations in the data used, the accuracy of the developed pix2code model is more stable.
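The GRU mentioned in the abstract replaces the LSTM decoder of the original pix2code. Its gating mechanism can be illustrated with a minimal single-unit sketch in pure Python; the weights below are illustrative toy values, not the paper's trained parameters, and the helper names (`gru_step`, `params`) are assumptions for this example only:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU step for a single hidden unit (scalar weights).

    p holds the weights: w_z, u_z, b_z (update gate), w_r, u_r, b_r
    (reset gate), and w_h, u_h, b_h (candidate state).
    """
    z = sigmoid(p["w_z"] * x + p["u_z"] * h + p["b_z"])        # update gate
    r = sigmoid(p["w_r"] * x + p["u_r"] * h + p["b_r"])        # reset gate
    h_cand = math.tanh(p["w_h"] * x + p["u_h"] * (r * h) + p["b_h"])
    return (1.0 - z) * h + z * h_cand                          # interpolate old and new state

# Toy weights, for illustration only.
params = {k: 0.5 for k in ("w_z", "u_z", "b_z",
                           "w_r", "u_r", "b_r",
                           "w_h", "u_h", "b_h")}

h = 0.0
for x in [1.0, -1.0, 0.5]:    # a toy input sequence
    h = gru_step(x, h, params)
print(round(h, 4))
```

Compared with an LSTM cell, the GRU merges the forget and input gates into a single update gate `z` and keeps no separate cell state, which reduces the parameter count while preserving the gated memory behavior.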


References


Balog, M., Gaunt, A. L., Brockschmidt, M., Nowozin, S., & Tarlow, D. (2017). DeepCoder: Learning to write programs. 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings.

Beltramelli, T. (2017). pix2code: Generating Code from a Graphical User Interface Screenshot. arXiv preprint, 1–9. http://arxiv.org/abs/1705.07962

Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. 1–9. http://arxiv.org/abs/1412.3555

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016-Decem, 770–778. https://doi.org/10.1109/CVPR.2016.90

Lai, S., Liu, K., He, S., & Zhao, J. (2016). How to Generate a Good Word Embedding. IEEE Intelligent Systems, 31(6), 5–14. https://doi.org/10.1109/MIS.2016.45

Levy, O., & Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. Advances in Neural Information Processing Systems, 3(January), 2177–2185.

Liu, Y., Hu, Q., & Shu, K. (2018). Improving pix2code based bi-directional LSTM. Proceedings of 2018 IEEE International Conference on Automation, Electronics and Electrical Engineering, AUTEEE 2018, 220–223. https://doi.org/10.1109/AUTEEE.2018.8720784

Pande Made Risky Cahya Dinatha, & Nur Aini Rakhmawati. (2020). Komparasi Term Weighting dan Word Embedding pada Klasifikasi Tweet Pemerintah Daerah [Comparison of Term Weighting and Word Embedding for Classifying Local Government Tweets]. Jurnal Nasional Teknik Elektro dan Teknologi Informasi, 9(2), 155–161. https://doi.org/10.22146/jnteti.v9i2.90

Patterson, J., & Gibson, A. (2017). Deep Learning: A Practitioner’s Approach. O’Reilly Media, Inc.

Taneja, P. (2017). Text Generation Using Different Recurrent Neural Networks. July.

Wan, Z., Xia, X., Lo, D., & Murphy, G. C. (2020). How does Machine Learning Change Software Development Practices? IEEE Transactions on Software Engineering, 1–14. https://doi.org/10.1109/TSE.2019.2937083

Yu, Y., Si, X., Hu, C., & Zhang, J. (2019). A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Computation, 31(7), 1235–1270. https://doi.org/10.1162/neco_a_01199

Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent Neural Network Regularization. arXiv preprint, 1–8. http://arxiv.org/abs/1409.2329




DOI: https://doi.org/10.31884/jtt.v10i1.356



Copyright (c) 2024 JTT (Jurnal Teknologi Terapan)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
