Skin cancers are broadly classified as benign or malignant, and recognizing their symptoms early is essential for timely treatment and a lower mortality rate. Dermoscopic imaging is a widely studied diagnostic medium that yields better results in computer-aided diagnosis than visual inspection alone. Deep learning and transfer learning have both been applied successfully in computational analysis, although further optimization is still needed. In this study, transfer learning was used to classify dermoscopic images of skin cancer into two categories: benign and malignant. To improve on the accuracy of previous research conducted on a Kaggle public dataset of 3,297 images, this study used 2,000 images. Two pretrained models, VGG-16 and residual network (ResNet)-50, were compared as feature extractors. Fine-tuning was performed by adding a flatten layer, two dense layers with the ReLU activation function, and one dense layer with the Softmax activation function to classify images into the two categories. Hyperparameter tuning over the optimizer, batch size, learning rate, and number of epochs was performed to find the best-performing parameter combination for each model. Before hyperparameter tuning, each model was also tested with input images resized to different dimensions. The VGG-16 model performed best with 128 × 128 pixel images, the Adam optimizer, a batch size of 64, a learning rate of 0.001, and 10 epochs, reaching an accuracy of 91% and a loss of 0.2712. The ResNet-50 model achieved a better accuracy of 94% and a loss of 0.2198 with the RMSprop optimizer, a batch size of 64, a learning rate of 0.0001, and 100 epochs. These results indicate that the proposed method provides good accuracy and can assist dermatologists in the early detection of skin cancer.
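The fine-tuning setup described above (a pretrained backbone used as a feature extractor, followed by a flatten layer, two ReLU dense layers, and a two-unit Softmax layer) can be sketched as follows in Keras. This is a minimal illustration, not the authors' code: the dense-layer widths (256 and 128) and the `weights` handling are assumptions, and the compile step uses the ResNet-50 hyperparameters reported in the abstract (RMSprop, learning rate 0.0001).

```python
# Sketch of the transfer-learning classifier described in the abstract.
# Assumptions: Keras/TensorFlow API; dense-layer widths are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_classifier(backbone="ResNet50", input_size=128, weights="imagenet"):
    """Build a benign/malignant classifier on a frozen pretrained backbone."""
    shape = (input_size, input_size, 3)
    if backbone == "ResNet50":
        base = tf.keras.applications.ResNet50(
            include_top=False, weights=weights, input_shape=shape)
    else:
        base = tf.keras.applications.VGG16(
            include_top=False, weights=weights, input_shape=shape)
    base.trainable = False  # use the pretrained network as a feature extractor

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # assumed width
        layers.Dense(128, activation="relu"),   # assumed width
        layers.Dense(2, activation="softmax"),  # benign vs. malignant
    ])
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"])
    return model
```

In practice the same builder would be called once per backbone, and the optimizer, batch size, learning rate, and epoch count varied during hyperparameter tuning (e.g. via `model.fit(..., batch_size=64, epochs=100)`).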