Tactile graphics are an essential tool for conveying visual information to visually impaired individuals. However, translating 2D plots, such as Bézier curves, polygons, and bar charts, into an effective tactile format remains a challenge. This paper presents a novel, two-stage deep learning pipeline for automating this conversion process.

Our method leverages a Pix2Pix architecture, employing a U-Net++ generator network for robust image generation. To improve the perceptual quality of the tactile representations, we incorporate an adversarial perceptual loss function alongside a gradient penalty. The pipeline operates sequentially: it first converts the source plot into a grayscale tactile representation and then transforms it into a channel-wise equivalent.

We evaluate the performance of our model on a comprehensive synthetic dataset consisting of 20,000 source-target pairs encompassing various 2D plot types. To quantify performance, we use fuzzy versions of established metrics such as pixel accuracy, the Dice coefficient, and the Jaccard index. Additionally, a human study is conducted to assess the visual quality of the generated tactile graphics.

The proposed approach demonstrates promising results, significantly streamlining the conversion of 2D plots into tactile graphics. This paves the way for fully automated systems that enhance the accessibility of visual information for visually impaired individuals.
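The abstract names an adversarial objective with a perceptual term and a gradient penalty but does not spell out the formulation. The sketch below shows one common way these pieces combine, assuming a WGAN-GP-style penalty on the discriminator, a fixed feature network (e.g. VGG activations) for the perceptual term, and a hypothetical weight `lambda_perc`; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def gradient_penalty(discriminator, real, fake):
    """WGAN-GP style penalty: push the discriminator's gradient norm
    toward 1 on random interpolates between real and fake images."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def generator_loss(discriminator, feature_net, fake, real, lambda_perc=10.0):
    """Adversarial term plus a perceptual term computed in the feature
    space of a fixed network (assumed here; e.g. VGG activations)."""
    adv = -discriminator(fake).mean()               # WGAN-style generator term
    perc = F.l1_loss(feature_net(fake), feature_net(real))
    return adv + lambda_perc * perc
```

The gradient penalty regularizes the discriminator, which tends to stabilize adversarial training; the perceptual term compares images in feature space rather than pixel space, rewarding outputs that look structurally similar to the target.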
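Similarly, the fuzzy evaluation metrics are only named, not defined. A minimal sketch under the standard soft (fuzzy-set) generalizations, where per-pixel min and max stand in for set intersection and union; the function name and the exact formulations used in the paper are assumptions.

```python
import numpy as np

def fuzzy_metrics(pred, target, eps=1e-7):
    """Fuzzy pixel accuracy, Dice coefficient, and Jaccard index.

    Both inputs are float arrays in [0, 1] (soft memberships rather
    than hard binary masks), so elementwise min/max act as the fuzzy
    intersection/union.
    """
    pred = np.clip(np.asarray(pred, dtype=np.float64), 0.0, 1.0)
    target = np.clip(np.asarray(target, dtype=np.float64), 0.0, 1.0)

    intersection = np.minimum(pred, target).sum()
    union = np.maximum(pred, target).sum()

    accuracy = 1.0 - np.abs(pred - target).mean()   # fuzzy pixel accuracy
    dice = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    jaccard = intersection / (union + eps)
    return accuracy, dice, jaccard
```

On hard binary masks these reduce to the familiar pixel accuracy, Dice, and Jaccard scores, so the fuzzy versions can be read as a relaxation that credits near-misses in grayscale predictions.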