ServiceNow – ChatGPT – Image Generation (Dall-E)

I thought it would be interesting to do some image generation with ChatGPT, so I started using the Dall-E API that OpenAI provides.

This configuration allows you to generate images via script and attach them to a record.

To do this, you will first need to run through this article to get a basic ChatGPT configuration in place on your environment.

Once you have that set up, you are ready to continue. First, we need to add a new REST message endpoint. Open the “ChatGPT” REST message and create a new HTTP method with the following details:

  • Name: Image
  • HTTP Method: POST
  • Endpoint: https://api.openai.com/v1/images/generations
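
If you want to sanity check the new method before writing any script includes, you can call it directly from a background script. This is just a rough sketch; it assumes the Authorization and Content-Type headers are already set on the “ChatGPT” REST message from the initial configuration, and the prompt is only an example.

var request = new sn_ws.RESTMessageV2("ChatGPT", "Image");
request.setRequestHeader("Content-Type", "application/json");
request.setRequestBody(JSON.stringify({
    "model": "dall-e-2",
    "prompt": "A simple test image of a blue square",
    "response_format": "b64_json"
}));

var response = request.execute();
gs.print("Status: " + response.getStatusCode());
// If the call worked, the body contains a data array with a b64_json entry
gs.print("Body (first 200 chars): " + response.getBody().substring(0, 200));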

Next, we are going to create a brand new script include that extends the ChatGPT script include created in the initial configuration. Create a new script include with the following details:

  • Name: ChatGPTImageProcessing
  • Description: Provides Dall-E API configuration.
var ChatGPTImageProcessing = Class.create();
ChatGPTImageProcessing.prototype = Object.extendsObject(global.ChatGPT, {
    initialize: function() {},

    createImage: function(requested_image_text) {
        try {
            var request = new sn_ws.RESTMessageV2("ChatGPT", "Image");
            var payload = {
                // Switch the model to "dall-e-3" if your account has access to it
                "model": "dall-e-2",
                "prompt": requested_image_text,
                "response_format": "b64_json"
            };

            this.logDebug("Payload: " + JSON.stringify(payload));
            request.setRequestBody(JSON.stringify(payload));
            request.setRequestHeader("Content-Type", "application/json");
            var response = request.execute();
            var httpResponseStatus = response.getStatusCode();
            var httpResponseContentType = response.getHeader('Content-Type');
            if (httpResponseStatus === 200 && httpResponseContentType === 'application/json') {
                this.logDebug("ChatGPT Imaging API call was successful");
                this.logDebug("ChatGPT Response was: " + response.getBody());
                // Get the base64 response and return it
                var parseResponse = JSON.parse(response.getBody());
                var base64Response = parseResponse.data[0].b64_json;
                return base64Response;
            } else {
                gs.error('Error calling the ChatGPT API. HTTP Status: ' + httpResponseStatus + " - body is " + response.getBody(), "ChatGPTImageProcessing");
            }
        } catch (ex) {
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPTImageProcessing");
        }
    },

    addImageAsAttachment: function(record, chatGPTResponse, fileName) {

        // Make sure the filename has .png at the end
        fileName = fileName.indexOf(".png") > -1 ? fileName : fileName + ".png";

        // The image API responds using either a URL or base64. We will use base64 as we can use that to attach it.
        var base64Bytes = GlideStringUtil.base64DecodeAsBytes(chatGPTResponse);

        var gsa = new GlideSysAttachment();
        var attachmentId = gsa.write(record, fileName, 'image/png', base64Bytes); // Write the attachment to the record.
        gs.print('Attachment created successfully: ' + attachmentId);

    },

    type: 'ChatGPTImageProcessing'
});

There are two functions here. The first, createImage, generates the image and returns it as a base64 string. The second, addImageAsAttachment, attaches the newly generated image to the GlideRecord you provide.
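
As a rough illustration of how the two functions fit together, here is a sketch that generates an image and attaches it to an incident. The incident number is only a placeholder; substitute any record on your instance.

var incident = new GlideRecord('incident');
if (incident.get('number', 'INC0010001')) { // Placeholder number - use one from your instance
    var imaging = new global.ChatGPTImageProcessing();
    var base64Image = imaging.createImage("A cartoon robot fixing a laptop");
    if (base64Image) {
        imaging.addImageAsAttachment(incident, base64Image, "robot_fix"); // The function adds .png if it is missing
    }
}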

To test this, create and run the fix script below, replacing the sys_id with the sys_id of your own fix script record. It should attach a new image called “cartoon_cat.png” to the fix script.

  • Name: ChatGPTImageProcessing Test
  • Description: Testing ChatGPT image processing
var fix_script = new GlideRecord('sys_script_fix');
if (fix_script.get('9f0dea4193400210d6f7fbf08bba10d4')) {
    var si = new global.ChatGPTImageProcessing();
    var image = si.createImage("Create an image of a fluffy cartoon cat that is wearing sunglasses");
    // Attach the image
    si.addImageAsAttachment(fix_script, image, "cartoon_cat.png");
}

If everything goes well, it should attach a file (you might need to refresh the record after running the script to see the attachment).
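
If you would rather confirm this from a script than from the form, you can query sys_attachment for the record. A quick sketch, using the same sys_id as the test fix script above:

var attachment = new GlideRecord('sys_attachment');
attachment.addQuery('table_sys_id', '9f0dea4193400210d6f7fbf08bba10d4'); // The sys_id used in the fix script above
attachment.addQuery('file_name', 'cartoon_cat.png');
attachment.query();
gs.print(attachment.hasNext() ? 'Attachment found' : 'No attachment found');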

The image it made for me was this! I thought it was pretty cool.

ServiceNow – Testing Automatic Code Entry from ChatGPT – Initial Testing

This is purely a test at the moment and still needs some work. As mentioned in my previous post, I am looking to create a ChatGPT integration that will allow ChatGPT to enter code directly into the environment.

To do this, I added several functions to the ChatGPT script include I created in the last post. The updated script include is at the bottom of this post.

To make use of this, I’m thinking I might create a table to store the ChatGPT requests.

The new functions are as follows:

  • extractAssistantMessage – Used to extract the response message from ChatGPT. Purely to make things a bit easier.
  • createScript – Creates a script on the system based on the response it receives from ChatGPT.
  • extractCodeBlocks – NOT YET USED: extracts the code blocks that ChatGPT returns. Not needed at present, but I might update it if responses ever contain multiple code blocks (see the sketch below).
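
As a rough idea (not something I have implemented yet), if ChatGPT ever replies with free text containing markdown-style fenced code blocks rather than the single JSON code key, the script include could be extended with something along these lines:

    // Sketch only - assumes the reply uses standard ``` fences, which the current premise avoids
    extractAllCodeBlocks: function(assistantMessage) {
        var blocks = [];
        var fenceRegex = /```(?:\w+)?\n?([\s\S]*?)```/g;
        var match;
        while ((match = fenceRegex.exec(assistantMessage)) !== null) {
            blocks.push(match[1]);
        }
        return blocks;
    },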

To test the process, I created a fix script. This fix script asks ChatGPT to create a ServiceNow fix script to query for active users with a first name of Jon.

var chatGPT = new global.ChatGPT();
try {
    var premise = chatGPT.setPremise("You are writing a code block for use in ServiceNow. I understand you cannot write it into ServiceNow directly. You should respond as a JSON string with no additional text. The response should have the following keys: name (used as a simple name for the script), table (the script table name, E.G. fix script is sys_script_fix), code (the code you are providing), notes (any notes you have about the code).");
    var message1 = chatGPT.createMessage("user", "Can you write me a ServiceNow fix script to query for active users with a first name of Jon.");
    var result = chatGPT.submitChat([premise, message1]);
    chatGPT.logDebug("RESULT IS: " + result);

    var extract = chatGPT.extractAssistantMessage(result);
    chatGPT.logDebug("ASSISTANT MESSAGE IS: " + extract);

    var scriptId = chatGPT.createScript(extract);
    if (scriptId) {
        chatGPT.logDebug("Script was created successfully with id: " + scriptId);
    } else {
        chatGPT.logDebug("Script creation failed.");
    }
} catch (e) {
    gs.error("Error during execution: " + e.message, "ChatGPT");
}

When you run the fix script, you should see responses like the following. For the result:

RESULT IS: {
  "id": "chatcmpl-XXXXXXXXXXXXXXXXX",
  "object": "chat.completion",
  "created": 1690730521,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "{\n  \"name\": \"ActiveUsersWithFirstNameJon\",\n  \"table\": \"sys_script_fix\",\n  \"code\": \"var grUsers = new GlideRecord('sys_user');\\n\\\ngrUsers.addQuery('active', true);\\n\\\ngrUsers.addQuery('first_name', 'Jon');\\n\\\ngrUsers.query();\\n\\\n\\n\\\nwhile (grUsers.next()) {\\n\\\n    gs.info('User: ' + grUsers.name);\\n\\\n}\",\n  \"notes\": \"This fix script queries the sys_user table for active users with a first name of 'Jon' and logs their names using the gs.info method.\"\n}"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 121,
    "completion_tokens": 133,
    "total_tokens": 254
  }
}

For the assistant message:

ASSISTANT MESSAGE IS: {
  "name": "Query Active Users with First Name Jon",
  "table": "sys_script_fix",
  "code": "var gr = new GlideRecord('sys_user');\n\ngr.addQuery('active', true);\ngr.addQuery('first_name', 'Jon');\ngr.query();",
  "notes": "This fix script queries the sys_user table for active users with a first name of Jon."
}

If all is well, you should get some messages saying the script has been created.

ChatGPT: Creating script with name: Query Active Users with First Name Jon
ChatGPT: Script created with sys_id: 02b1a8ae475831100dbe0bdbd36d43f0
ChatGPT: Script was created successfully with id: 02b1a8ae475831100dbe0bdbd36d43f0

I have seen a few issues where the response from ChatGPT contains unescaped characters that the JSON parsing doesn’t like. I’m still trying to find a good way around that.
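
One mitigation I am considering (not yet part of the script include below) is a more forgiving parse that strips markdown code fences and retries before giving up, roughly like this:

    // Sketch only - a defensive alternative to calling JSON.parse directly on the reply
    safeParseAssistantMessage: function(assistantMessage) {
        try {
            return JSON.parse(assistantMessage);
        } catch (ex) {
            // Strip any markdown fences and try once more
            var cleaned = assistantMessage.replace(/```(?:json)?/g, "").trim();
            try {
                return JSON.parse(cleaned);
            } catch (ex2) {
                gs.error("Could not parse assistant message: " + ex2, "ChatGPT");
                return null;
            }
        }
    },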

Below is the updated ChatGPT script include with the new functions and some additional logging. Hope it helps.

var ChatGPT = Class.create();
ChatGPT.prototype = {
    debug: true, // Set to true to enable logging

    initialize: function() {
        this.model = "gpt-3.5-turbo";
        this.logDebug("ChatGPT instance created with model: " + this.model);
    },

    setPremise: function(premise) {
        try {
            this.logDebug("Setting premise: " + premise);
            return this.createMessage("system", premise);
        } catch (ex) {
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPT");
        }
    },

    createMessage: function(role, content) {
        try {
            this.logDebug("Creating message with role: " + role + " and content: " + content);
            return {
                "role": role,
                "content": content
            };
        } catch (ex) {
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPT");
        }
    },

    submitChat: function(messages) {
        try {
            this.logDebug("Submitting chat messages: " + JSON.stringify(messages));
            var request = new sn_ws.RESTMessageV2("ChatGPT", "POST");
            request.setHttpMethod('POST');

            var payload = {
                "model": this.model,
                "messages": messages,
                "temperature": 0.7
            };

            this.logDebug("Payload: " + JSON.stringify(payload));
            request.setRequestBody(JSON.stringify(payload));

            var response = request.execute();
            var httpResponseStatus = response.getStatusCode();
            var httpResponseContentType = response.getHeader('Content-Type');

            if (httpResponseStatus === 200 && httpResponseContentType === 'application/json') {
                this.logDebug("ChatGPT API call was successful");
                this.logDebug("ChatGPT Response was: " + response.getBody());
                return response.getBody();
            } else {
                gs.error('Error calling the ChatGPT API. HTTP Status: ' + httpResponseStatus, "ChatGPT");
            }
        } catch (ex) {
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPT");
        }
    },

    extractAssistantMessage: function(apiResponse) {
        try {
            var apiResponseObject = JSON.parse(apiResponse);

            if (apiResponseObject.choices && apiResponseObject.choices[0] && apiResponseObject.choices[0].message && apiResponseObject.choices[0].message.content) {
                this.logDebug("Extracted assistant message: " + apiResponseObject.choices[0].message.content);
                return apiResponseObject.choices[0].message.content;
            } else {
                gs.error("No message found in the API response.", "ChatGPT");
                return null;
            }
        } catch (ex) {
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPT");
        }
    },

    extractCodeBlocks: function(assistantMessage) {
        try {
            if (!assistantMessage) {
                gs.error("Assistant message is null or undefined", "ChatGPT");
                return null;
            }

            if (typeof(assistantMessage) == "string")
                assistantMessage = JSON.parse(assistantMessage);

            var code = assistantMessage.code;

            if (!code) {
                gs.error("No code found in the assistant message.", "ChatGPT");
                return null;
            }

            return code;
        } catch (ex) {
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPT");
        }
    },

    createScript: function(scriptJson) {
        try {
            if (typeof(scriptJson) == "string")
                scriptJson = JSON.parse(scriptJson);

            if (!scriptJson.name || !scriptJson.code || !scriptJson.notes || !scriptJson.table) {
                gs.error("JSON is missing required properties", "ChatGPT");
                return null;
            }

            this.logDebug("Creating script with name: " + scriptJson.name);

            var gr = new GlideRecord(scriptJson.table);
            gr.initialize();
            gr.setValue('name', scriptJson.name);
            gr.setValue('script', scriptJson.code);
            gr.setValue('description', scriptJson.notes);
            var sys_id = gr.insert();

            if (sys_id) {
                this.logDebug("Script created with sys_id: " + sys_id);
                return sys_id;
            } else {
                gs.error("Failed to create script", "ChatGPT");
                return null;
            }
        } catch (e) {
            gs.error("Failed to parse script JSON: " + e.message, "ChatGPT");
            return null;
        }
    },

    logDebug: function(log_message) {
        if (this.debug) {
            gs.log(log_message, "ChatGPT");
        }
    },

    type: 'ChatGPT'
};

ServiceNow – ChatGPT Integration

ServiceNow have just started offering some tools for ChatGPT integration, some of which fall under their IntegrationHub Pro offering. It’s well worth checking out the new official options, in my opinion.

A while ago I thought I would try to set up my own integration with ChatGPT on a personal instance, and I have just got round to it. I thought I’d document the process here in case anyone is interested.

I’ll write a few of these articles, as I had an idea that I thought might be useful: the ability to ask ChatGPT to write a script, then have ServiceNow create that script on the platform.

To do the initial setup, do the following:

Create a ChatGPT API Key

Open the following link and create an API key: https://platform.openai.com/account/api-keys

As an FYI, the API key is separate from any ChatGPT Plus subscription you might have – it will likely come under a separate billing process. Once you have created the key, note it down and continue with creating a REST message.

Create a new REST Message

In ServiceNow, open “REST Message” under System Web Services.

Create a new REST message. Enter the following details:

  • Name: ChatGPT
  • Endpoint: https://api.openai.com/v1/chat/completions
  • Open the “HTTP Request” tab. Create two new HTTP headers as follows:
  • Authorization: Bearer [API Key] (for example, Bearer sk-xyzxxxxxxxxxxx)
  • Content-Type: application/json

Create a new “HTTP Method” with the following details. You can delete the default GET.

  • Name: POST
  • HTTP Method: POST
  • Endpoint: https://api.openai.com/v1/chat/completions

You should now have the bones in place to send messages; next, we need to write some code to submit the requests.
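
Before writing any code, you can optionally confirm the REST message works by calling it straight from a background script. A quick sketch (the prompt is just an example):

var request = new sn_ws.RESTMessageV2("ChatGPT", "POST");
request.setRequestBody(JSON.stringify({
    "model": "gpt-3.5-turbo",
    "messages": [{ "role": "user", "content": "Say hello in five words." }],
    "temperature": 0.7
}));

var response = request.execute();
gs.print("Status: " + response.getStatusCode());
gs.print("Body: " + response.getBody());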

Create Script Include

We will now create a Script Include that can be used to process ChatGPT requests. Below is the initial code I have used.

  • Name: ChatGPT
  • API Name (automatically generated): global.ChatGPT
var ChatGPT = Class.create();
ChatGPT.prototype = {
    initialize: function() {
        this.model = "gpt-3.5-turbo";
        // Uncomment the following line if you want to use "gpt-4" model
        // this.model = "gpt-4"; // Note: There is a waitlist for this.

        gs.info("ChatGPT instance created with model: " + this.model, "ChatGPT");
    },

    // Sets the premise for the chat
    setPremise: function(premise) {
        gs.info("Setting premise: " + premise, "ChatGPT");
        return this.createMessage("system", premise);
    },

    // Creates a message object with role and content
    createMessage: function(role, content) {
        gs.info("Creating message with role: " + role + " and content: " + content, "ChatGPT");
        return {
            "role": role,
            "content": content
        };
    },

    // Submits chat messages to the model applied in this script include
    submitChat: function(messages) {
        gs.info("Submitting chat messages: " + JSON.stringify(messages), "ChatGPT");

        try {
            // Create a new RESTMessageV2 instance for the ChatGPT REST message and its POST method
            var request = new sn_ws.RESTMessageV2("ChatGPT", "POST");
            request.setHttpMethod('POST');

            // Set the payload including model, messages, and temperature
            var payload = {
                "model": this.model,
                "messages": messages,
                "temperature": 0.7
            };

            // Log the payload for debugging purposes
            gs.info("Payload: " + JSON.stringify(payload), "ChatGPT");

            // Set the request body
            request.setRequestBody(JSON.stringify(payload));

            // Send the request
            var response = request.execute();

            // Get the response status and content type
            var httpResponseStatus = response.getStatusCode();
            var httpResponseContentType = response.getHeader('Content-Type');

            // If the request is successful and the content type is JSON
            if (httpResponseStatus === 200 && httpResponseContentType === 'application/json') {
                gs.info("ChatGPT API call was successful", "ChatGPT");
                return response.getBody();
            } else {
                gs.error('Error calling the ChatGPT API. HTTP Status: ' + httpResponseStatus, "ChatGPT");
            }
        } catch (ex) {
            // Log any exception that happens during the API call
            var exception_message = ex.getMessage();
            gs.error(exception_message, "ChatGPT");
        }
    },

    type: 'ChatGPT'
};

A bit about the functions:

  • setPremise – Can be used to set the premise of a conversation. For example, you might want ChatGPT to reply in a certain style or format. The premise could be something like, “You are speaking to a non-technical user, so any answers should be summarised for that audience”.
  • createMessage – Used to create the message you are about to send, with two variables: role and content. Generally this is to aid with conversational context, which I’ll talk about in a future post. To use it, call the function with the role as “user” and the content as the message you want to send.
  • submitChat – Sends the messages to the ChatGPT endpoint using the REST message we defined earlier. It takes an array of messages, so you can use the createMessage function and send that through, or use the setPremise function first to set the premise of the chat and then send a message after it. There is a rough multi-turn sketch at the end of this article.

Testing the code

To test whether the code works, you can create a fix script. Here is an example that sets the premise that ChatGPT is a comedian and then asks for its thoughts on rainy weather.

// Create an instance of the ChatGPT class
var chatGPT = new global.ChatGPT();

// Set the premise for the chat with the assistant. The premise helps set the context of the conversation
var premise = chatGPT.setPremise("You are a comedian and you love to make people laugh. Your responses should be comedic");

// Create a user message asking the assistant what it thinks about rainy weather.
var message1 = chatGPT.createMessage("user", "What do you think about rainy weather?");

// Submit the chat to the GPT-3.5 Turbo model (default). The chat consists of the premise and the user's request.
// The 'submitChat' function accepts an array of messages which form a conversation.
var result = chatGPT.submitChat([premise, message1]);

// Print the result. This is the raw JSON response body returned by the API.
gs.print(result);

You should have a payload like this:

{
    "model": "gpt-3.5-turbo",
    "messages":
    [
        {
            "role": "system",
            "content": "You are a comedian and you love to make people laugh. Your responses should be comedic"
        },
        {
            "role": "user",
            "content": "What do you think about rainy weather?"
        }
    ],
    "temperature": 0.7
}

You should get a response like this:

{
  "id": "chatcmpl-XXXXXXXXXXXXXXXX",
  "object": "chat.completion",
  "created": 1686498256,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 38,
    "completion_tokens": 56,
    "total_tokens": 94
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Rainy weather? Oh, it's the perfect time to stay curled up in bed all day and pretend like you have a life. Plus, it's the only time you can use the excuse \"sorry, can't go out, it's raining\" to avoid social situations."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}

As you can see, ChatGPT sent a message back with the role of “assistant”. I hope this helps! I’ll be writing more articles around this with an aim to get the automatic code deployment working.
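
In the meantime, here is a rough sketch of the conversational context idea mentioned above: pull the assistant’s text out of the response, then feed it back in as another message so a follow-up question has context. The follow-up question here is only an example.

var chatGPT = new global.ChatGPT();
var premise = chatGPT.setPremise("You are a comedian and you love to make people laugh. Your responses should be comedic");
var question = chatGPT.createMessage("user", "What do you think about rainy weather?");

// First round trip - pull the assistant's text out of the JSON response
var firstResponse = chatGPT.submitChat([premise, question]);
var firstReply = JSON.parse(firstResponse).choices[0].message.content;
gs.print("Assistant said: " + firstReply);

// Feed the reply back in so the follow-up question has context
var assistantMessage = chatGPT.createMessage("assistant", firstReply);
var followUp = chatGPT.createMessage("user", "Can you make that joke even shorter?");
var secondResult = chatGPT.submitChat([premise, question, assistantMessage, followUp]);
gs.print(secondResult);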

ChatGPT – Truncation of Responses

ChatGPT has already proven to be an incredibly powerful tool. People are launching phenomenal projects with it, showcasing its potential as a transformative technology. Some have even referred to it as the “iPhone moment” for AI, and I can’t help but agree.

That being said, like any tool, it does have its quirks. One common issue is the truncation of responses. For instance, while generating code, ChatGPT might abruptly cut off, and clicking “Regenerate response” often just repeats the same text.

If you encounter this, there are strategies to mitigate it. One simple method is to prompt the model with:

“Please continue.”

This command typically gets ChatGPT to carry on from where it left off. However, it’s worth noting that there can be peculiarities with this approach. If the AI halts during a code block, it may resume the code but not within the same code block.

A more effective workaround I’ve found involves pinpointing the most recent function it began to write (or any line of code you want to continue from) and instructing it to:

“Please continue from function [function name] onwards.”

This method usually results in the continuation of the code within a proper code block.