Compare commits

...

33 Commits

Author SHA1 Message Date
techknowlogick
dfad569e40 1.7.1 changelog (#5918) 2019-01-31 11:11:25 -05:00
techknowlogick
c3b67ff2f6 Disable redirect for i18n (#5910) (#5916) 2019-01-31 10:07:57 -05:00
Lanre Adelowo
5c30817b5f fix compare button on upstream repo leading to 404 (#5877) (#5914) 2019-01-31 09:55:39 -05:00
Lanre Adelowo
438848a2ca respect value of REQUIRE_SIGNIN_VIEW (#5901) (#5915) 2019-01-31 09:38:01 -05:00
Lunny Xiao
9d4aa78113 Fix bug when read public repo lfs file (#5913)
* fix bug when read public repo lfs file

* add comment on lfs permission check
2019-01-31 13:36:10 +00:00
zeripath
e5af93af20 Only allow local login if password is non-empty (#5906) (#5908) 2019-01-30 23:46:19 +02:00
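A minimal sketch of the guard this commit adds, with simplified stand-in types rather than Gitea's actual models: local login is rejected outright when no local password is stored (e.g. OAuth2-only accounts), instead of comparing the supplied password against an empty hash.
```go
// Sketch of the added guard; types and ValidatePassword are illustrative stand-ins.
package main

import "fmt"

type User struct{ PasswdHash string }

// IsPasswordSet reports whether the account has a local password at all.
func (u *User) IsPasswordSet() bool { return u.PasswdHash != "" }

// ValidatePassword stands in for the real salted-hash comparison.
func (u *User) ValidatePassword(p string) bool { return u.PasswdHash == "hash:"+p }

func localLoginAllowed(u *User, password string) bool {
	// The fix: check IsPasswordSet() before ever calling ValidatePassword().
	return u.IsPasswordSet() && u.ValidatePassword(password)
}

func main() {
	oauthOnly := &User{}                          // no local password stored
	fmt.Println(localLoginAllowed(oauthOnly, "")) // false: an empty password can no longer match
}
```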
Lauris BH
3f802a2846 Fix go-get URL generation (#5905) (#5907) 2019-01-30 23:29:44 +02:00
zeripath
0190d3c243 Prevent nil dereference in mailIssueCommentToParticipants (#5891, #5895) (#5894)
* Ensure issue.Poster is loaded in mailIssueCommentToParticipants (#5891)

Previous code could potentially dereference nil - this PR ensures
that the poster is loaded before dereferencing it.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Also ensure the repo is loaded

Signed-off-by: Andrew Thornton <art27@cantab.net>
2019-01-29 22:44:00 +00:00
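The commit message above describes the pattern applied in the issue_mail hunk further down: load the poster before touching it. A minimal, self-contained sketch of that load-before-dereference guard (types and the lookup are simplified stand-ins, not Gitea's API):
```go
package main

import "fmt"

type User struct{ IsActive bool }

type Issue struct {
	PosterID int64
	Poster   *User
}

// loadPoster lazily resolves issue.Poster; the fix is to call it before any
// access to issue.Poster instead of assuming the field is already populated.
func (issue *Issue) loadPoster() error {
	if issue.Poster == nil {
		issue.Poster = &User{IsActive: true} // stand-in for the database lookup
	}
	return nil
}

func mailToParticipants(issue *Issue) error {
	if err := issue.loadPoster(); err != nil {
		return fmt.Errorf("loadPoster [%d]: %v", issue.PosterID, err)
	}
	if issue.Poster.IsActive { // safe: Poster is guaranteed non-nil here
		fmt.Println("would mail the issue poster")
	}
	return nil
}

func main() {
	_ = mailToParticipants(&Issue{PosterID: 1})
}
```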
Lauris BH
4fe1a3050e When creating new repository fsck option should be enabled (#5817) (#5885) 2019-01-29 09:42:47 +08:00
zeripath
29799537a7 API: Fix null pointer in attempt to Sudo if not logged in (#5872) (#5884)
Backport of #5872 to v1.7

Signed-off-by: Andrew Thornton <art27@cantab.net>
2019-01-28 20:26:55 +00:00
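A minimal sketch of the fix: verify the request is signed in before touching ctx.User, so an anonymous API call carrying a sudo parameter no longer dereferences a nil user. The context shape here is illustrative, not Gitea's real type.
```go
package main

import "fmt"

type User struct{ IsAdmin bool }

type Context struct {
	IsSigned bool
	User     *User // nil for anonymous requests
}

func canSudo(ctx *Context) bool {
	// Previously only ctx.User.IsAdmin was checked, which panics when User is nil.
	return ctx.IsSigned && ctx.User.IsAdmin
}

func main() {
	fmt.Println(canSudo(&Context{}))                                           // false, no panic
	fmt.Println(canSudo(&Context{IsSigned: true, User: &User{IsAdmin: true}})) // true
}
```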
Harshit Bansal
d3a334d99a Fix an error while adding a dependency via UI. (Backport #5862) (#5876)
Fixes: #5783.
2019-01-28 12:51:30 +00:00
yasuokav
28d9305ea3 Fix delete correct temp directory (#5840) 2019-01-25 02:33:15 -05:00
kolaente
8a9f5b3b50 Added docs for the tree api (#5835)
* Added docs for the tree api

* Updated swagger docs

* Added missing response definition

* Updated swagger docs

* Fixed swagger docs
2019-01-24 20:40:54 +02:00
Antoine GIRARD
f28e17473c Backport #5830 : Include Go toolchain to --version (#5832)
* Include Go version

* fix import order
2019-01-24 10:33:28 -05:00
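The change itself is visible in the main.go hunk below; as a standalone sketch of the same idea, runtime.Version() supplies the Go toolchain string appended to the --version output:
```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

func formatBuiltWith(tags string) string {
	if len(tags) == 0 {
		return " built with " + runtime.Version()
	}
	return " built with " + runtime.Version() + " : " + strings.Replace(tags, " ", ", ", -1)
}

func main() {
	fmt.Println("Gitea version 1.7.1" + formatBuiltWith("bindata sqlite"))
}
```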
Lauris BH
2c26521579 Request for public keys only if LDAP attribute is set (#5816) (#5819)
* Update go-ldap dependency

* Request for public keys only if attribute is set
2019-01-24 12:21:36 +02:00
Joona Hoikkala
f635041c98 Fix TLS errors when using acme/autocert for local connections (#5820) (#5826) 2019-01-24 09:48:02 +02:00
techknowlogick
3fa49f3780 1.7.0 changelog (#5802) 2019-01-22 21:21:46 +02:00
Lanre Adelowo
4577cddd28 Disallow empty titles (#5785) (#5794)
* add util method and tests

* make sure the title of an issue cannot be empty

* wiki title cannot be empty

* pull request title cannot be empty

* update to make use of the new util method
2019-01-21 17:55:12 +02:00
techknowlogick
8da5237107 1.7.0-rc3 changelog (#5756) 2019-01-18 01:08:41 -05:00
techknowlogick
8006b1bc7a backport 1.6.4 changelog to 1.7 branch (#5741) 2019-01-16 14:43:06 +02:00
Julian Tölle
8d400320c6 fix: use correct value for "MSpan Structures Obtained" #4742 (#5706) (#5716)
Signed-off-by: Julian Tölle <julian.toelle97@gmail.com>
2019-01-13 16:32:55 +02:00
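The template hunk near the end of this compare swaps .SysStatus.HeapSys for .SysStatus.MSpanSys. A small standalone program showing the two runtime.MemStats fields involved (values are machine-dependent; the point is that they measure different things):
```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapSys:  %d bytes\n", m.HeapSys)  // what the row displayed before
	fmt.Printf("MSpanSys: %d bytes\n", m.MSpanSys) // what "MSpan Structures Obtained" should show
}
```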
zeripath
e9c4609410 Do not display the raw OpenID error in the UI (#5705) (#5712)
* Do not display the raw OpenID error in the UI

If there are no `WHITELIST_URIS` or `BLACKLIST_URIS` set in the openid
section of the app.ini, it is possible that gitea can leak sensitive
information about the local network through the error provided by the
UI. This PR hides the error information and logs it.

Fix #4973

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update auth_openid.go

Place error log within the `err != nil` branch.
2019-01-13 08:05:20 -05:00
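A minimal sketch of the log-and-genericize pattern this commit applies; redirectURL below stands in for the openid.RedirectURL call shown in the diff, and the handler shape is illustrative rather than Gitea's real signature.
```go
package main

import (
	"errors"
	"fmt"
	"log"
)

func redirectURL(id, redirectTo, realm string) (string, error) {
	// Simulated failure carrying internal network detail that must not reach the UI.
	return "", errors.New("dial tcp 10.0.0.5:443: connection refused")
}

func signInOpenID(id, redirectTo, realm string) (string, error) {
	url, err := redirectURL(id, redirectTo, realm)
	if err != nil {
		log.Printf("Error in OpenID redirect URL: %s, %v", redirectTo, err) // detail stays in the log
		return "", fmt.Errorf("Unable to find OpenID provider in %s", redirectTo)
	}
	return url, nil
}

func main() {
	_, err := signInOpenID("https://id.example/alice", "https://gitea.example/user/login/openid", "https://gitea.example/")
	fmt.Println(err) // only the generic message is shown to the user
}
```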
Zsombor
176a6048b4 Update xorm to fix issue #5659 and #5651 (#5680) (#5692) 2019-01-10 21:43:29 +02:00
Lunny Xiao
483aa06b07 fix public will not be reused as public key after deleting as deploy key (#5671) (#5684) 2019-01-10 09:23:33 -05:00
zeripath
551dc58a4d When redirecting clean the path to avoid redirecting to //www.othersite.com (#5669) (#5679)
Fix #5627

Signed-off-by: Andrew Thornton <art27@cantab.net>
2019-01-09 17:32:49 -05:00
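The fix relies on path.Clean collapsing the leading double slash, so a crafted request path can no longer become a protocol-relative URL pointing at another host. A short demonstration:
```go
package main

import (
	"fmt"
	"path"
)

func main() {
	crafted := "//www.othersite.com/more"
	fmt.Println(crafted + "/")             // //www.othersite.com/more/ (browsers treat this as external)
	fmt.Println(path.Clean(crafted + "/")) // /www.othersite.com/more   (stays on the same host)
}
```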
Julian
41a2bfe3ae Only count users own actions for heatmap contributions (#5647) (#5655)
Signed-off-by: Julian Tölle <julian.toelle97@gmail.com>
2019-01-06 22:16:55 +02:00
Julian
652e09fc3e fix commit page showing status for current default branch (#5650) (#5653)
Signed-off-by: Julian Tölle <julian.toelle97@gmail.com>
2019-01-06 19:11:49 +01:00
Harshit Bansal
c9b57a5135 Don't close issues via commits on non-default branch. (#5622) (#5643)
Adds a small check to close the issues only if the referencing commits
are on the default branch.

Fixes: #2314.
2019-01-05 22:04:02 +02:00
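A minimal sketch of the added guard (names simplified): commits referenced in a push only close issues when the push targets the repository's default branch, matching the check added in the models hunk below.
```go
package main

import "fmt"

type Repository struct{ DefaultBranch string }

func shouldCloseIssues(repo *Repository, pushedBranch string) bool {
	// Change issue status only if the commit has been pushed to the default branch.
	return repo.DefaultBranch == pushedBranch
}

func main() {
	repo := &Repository{DefaultBranch: "master"}
	fmt.Println(shouldCloseIssues(repo, "master"))              // true
	fmt.Println(shouldCloseIssues(repo, "non-existing-branch")) // false
}
```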
zeripath
2904d8d6aa Fix sqlite deadlock when assigning to a PR (#5640) (#5642)
* Fix sqlite deadlock when assigning to a PR

Fix 5639

Signed-off-by: Andrew Thornton <art27@cantab.net>

* More possible deadlocks found and fixed

Signed-off-by: Andrew Thornton <art27@cantab.net>
2019-01-05 10:18:17 -05:00
Jonas Franz
109fc7975b Add changelog for 1.6.3 and 1.7.0-rc2 (#5638)
Signed-off-by: Jonas Franz <info@jonasfranz.software>
2019-01-04 19:17:32 +01:00
zeripath
3ee3a4b595 SECURITY: protect DeleteFilePost et al with cleanUploadFileName (#5631) (#5635)
This commit wraps more of the TreePaths with cleanUploadFileName

Signed-off-by: Andrew Thornton <art27@cantab.net>
2019-01-04 17:41:30 +01:00
Lauris BH
14e218cbd1 Backport latest translation changes 2019-01-04 11:26:23 +02:00
0x5c
b5f4911afa Documentation: Clarity for HTTPS setups (#5626)
[https-setup]
- Made it clearer that HTTP redirection is possible
[config-cheat-sheet]
- Clarified the behaviour of the redirection-related config keys

Signed-off-by: Matti Ranta <matti@mdranta.net>
2019-01-03 18:53:51 -05:00
58 changed files with 718 additions and 203 deletions

View File

@@ -4,7 +4,32 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.7.0-rc1](https://github.com/go-gitea/gitea/releases/tag/v1.7.0) - 2019-01-02
## [1.7.1](https://github.com/go-gitea/gitea/releases/tag/v1.7.1) - 2019-01-31
* SECURITY
* Disable redirect for i18n (#5910) (#5916)
* Only allow local login if password is non-empty (#5906) (#5908)
* Fix go-get URL generation (#5905) (#5907)
* BUGFIXES
* Fix TLS errors when using acme/autocert for local connections (#5820) (#5826)
* Request for public keys only if LDAP attribute is set (#5816) (#5819)
* Fix delete correct temp directory (#5840) (#5839)
* Fix an error while adding a dependency via UI (#5862) (#5876)
* Fix null pointer in attempt to Sudo if not logged in (#5872) (#5884)
* When creating new repository fsck option should be enabled (#5817) (#5885)
* Prevent nil dereference in mailIssueCommentToParticipants (#5891) (#5895) (#5894)
* Fix bug when read public repo lfs file (#5913) (#5912)
* Respect value of REQUIRE_SIGNIN_VIEW (#5901) (#5915)
* Fix compare button on upstream repo leading to 404 (#5877) (#5914)
* DOCS
* Added docs for the tree api (#5835)
* MISC
* Include Go toolchain to --version (#5832) (#5830)
## [1.7.0](https://github.com/go-gitea/gitea/releases/tag/v1.7.0) - 2019-01-22
* SECURITY
* Do not display the raw OpenID error in the UI (#5705) (#5712)
* When redirecting clean the path to avoid redirecting to external site (#5669) (#5679)
* Prevent DeleteFilePost doing arbitrary deletion (#5631)
* BREAKING
* Restrict permission check on repositories and fix some problems (#5314)
* Show only opened milestones on issues page milestone filter (#5051)
@@ -23,6 +48,13 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Give user a link to create PR after push (#4716)
* Add rebase with merge commit merge style (#3844) (#4052)
* BUGFIXES
* Disallow empty titles (#5785) (#5794)
* Fix sqlite deadlock when assigning to a PR (#5640) (#5642)
* Don't close issues via commits on non-default branch. (#5622) (#5643)
* Fix commit page showing status for current default branch (#5650) (#5653)
* Only count users own actions for heatmap contributions (#5647) (#5655)
* Update xorm to fix issue postgresql dumping issues (#5680) (#5692)
* Use correct value for "MSpan Structures Obtained" (#5706) (#5716)
* Fix bug on modifying sshd username (#5624)
* Delete tags in mirror which are removed for original repo. (#5609)
* Fix wrong text getting saved on editing second comment on an issue. (#5608)
@@ -149,6 +181,18 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Git-Trees API (#5403)
* Only chown directories during docker setup if necessary. Fix #4425 (#5064)
## [1.6.4](https://github.com/go-gitea/gitea/releases/tag/v1.6.4) - 2019-01-15
* BUGFIX
* Fix SSH key now can be reused as public key after deleting as deploy key (#5671) (#5685)
* When redirecting clean the path to avoid redirecting to external site (#5669) (#5703)
* Fix to use correct value for "MSpan Structures Obtained" (#5706) (#5715)
## [1.6.3](https://github.com/go-gitea/gitea/releases/tag/v1.6.3) - 2019-01-04
* SECURITY
* Prevent DeleteFilePost doing arbitrary deletion (#5631)
* BUGFIX
* Fix wrong text getting saved on editing second comment on an issue (#5608)
## [1.6.2](https://github.com/go-gitea/gitea/releases/tag/v1.6.2) - 2018-12-21
* SECURITY
* Sanitize uploaded file names (#5571) (#5573)

Gopkg.lock generated
View File

@@ -406,11 +406,11 @@
version = "v0.6.0" version = "v0.6.0"
[[projects]] [[projects]]
digest = "1:931a62a1aacc37a5e4c309a111642ec4da47b4dc453cd4ba5481b12eedb04a5d" digest = "1:d366480c27ab51b3f7e995f25503063e7a6ebc7feb269df2499c33471f35cd62"
name = "github.com/go-xorm/xorm" name = "github.com/go-xorm/xorm"
packages = ["."] packages = ["."]
pruneopts = "NUT" pruneopts = "NUT"
revision = "401f4ee8ff8cbc40a4754cb12192fbe4f02f3979" revision = "1cd2662be938bfee0e34af92fe448513e0560fb1"
[[projects]] [[projects]]
branch = "master" branch = "master"
@@ -1005,12 +1005,12 @@
version = "v1.31.1" version = "v1.31.1"
[[projects]] [[projects]]
digest = "1:01f4ac37c52bda6f7e1bd73680a99f88733c0408aaa159ecb1ba53a1ade9423c" digest = "1:7e1c00b9959544fa1ccca7cf0407a5b29ac6d5201059c4fac6f599cb99bfd24d"
name = "gopkg.in/ldap.v2" name = "gopkg.in/ldap.v2"
packages = ["."] packages = ["."]
pruneopts = "NUT" pruneopts = "NUT"
revision = "d0a5ced67b4dc310b9158d63a2c6f9c5ec13f105" revision = "bb7a9ca6e4fbc2129e3db588a34bc970ffe811a9"
version = "v2.4.1" version = "v2.5.1"
[[projects]] [[projects]]
digest = "1:cfe1730a152ff033ad7d9c115d22e36b19eec6d5928c06146b9119be45d39dc0" digest = "1:cfe1730a152ff033ad7d9c115d22e36b19eec6d5928c06146b9119be45d39dc0"
@@ -1173,6 +1173,7 @@
"github.com/keybase/go-crypto/openpgp", "github.com/keybase/go-crypto/openpgp",
"github.com/keybase/go-crypto/openpgp/armor", "github.com/keybase/go-crypto/openpgp/armor",
"github.com/keybase/go-crypto/openpgp/packet", "github.com/keybase/go-crypto/openpgp/packet",
"github.com/klauspost/compress/gzip",
"github.com/lafriks/xormstore", "github.com/lafriks/xormstore",
"github.com/lib/pq", "github.com/lib/pq",
"github.com/lunny/dingtalk_webhook", "github.com/lunny/dingtalk_webhook",

View File

@@ -38,7 +38,7 @@ ignored = ["google.golang.org/appengine*"]
[[override]]
name = "github.com/go-xorm/xorm"
revision = "401f4ee8ff8cbc40a4754cb12192fbe4f02f3979"
revision = "1cd2662be938bfee0e34af92fe448513e0560fb1"
[[override]]
name = "github.com/go-xorm/builder"

View File

@@ -9,10 +9,11 @@ package cmd
import ( import (
"errors" "errors"
"fmt" "fmt"
"strings"
"code.gitea.io/gitea/models" "code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"github.com/urfave/cli" "github.com/urfave/cli"
) )
@@ -24,7 +25,7 @@ func argsSet(c *cli.Context, args ...string) error {
return errors.New(a + " is not set") return errors.New(a + " is not set")
} }
if len(strings.TrimSpace(c.String(a))) == 0 { if util.IsEmptyString(a) {
return errors.New(a + " is required") return errors.New(a + " is required")
} }
} }

View File

@@ -122,9 +122,8 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
- `LFS_CONTENT_PATH`: **./data/lfs**: Where to store LFS files.
- `LFS_JWT_SECRET`: **\<empty\>**: LFS authentication secret, change this a unique string.
- `LFS_HTTP_AUTH_EXPIRY`: **20m**: LFS authentication validity period in time.Duration, pushes taking longer than this may fail.
- `REDIRECT_OTHER_PORT`: **false**: If true and `PROTOCOL` is https, redirects http requests
on another (https) port.
- `PORT_TO_REDIRECT`: **80**: Port used when `REDIRECT_OTHER_PORT` is true.
- `REDIRECT_OTHER_PORT`: **false**: If true and `PROTOCOL` is https, allows redirecting http requests on `PORT_TO_REDIRECT` to the https port Gitea listens on.
- `PORT_TO_REDIRECT`: **80**: Port for the http redirection service to listen on. Used when `REDIRECT_OTHER_PORT` is true.
- `ENABLE_LETSENCRYPT`: **false**: If enabled you must set `DOMAIN` to valid internet facing domain (ensure DNS is set and port 80 is accessible by letsencrypt validation server).
By using Lets Encrypt **you must consent** to their [terms of service](https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf).
- `LETSENCRYPT_ACCEPTTOS`: **false**: This is an explicit check that you accept the terms of service for Let's Encrypt.

View File

@@ -30,8 +30,22 @@ HTTP_PORT = 3000
CERT_FILE = cert.pem
KEY_FILE = key.pem
```
To learn more about the config values, please checkout the [Config Cheat Sheet](../config-cheat-sheet#server).
### Setting-up HTTP redirection
The Gitea server is only able to listen to one port; to redirect HTTP requests to the HTTPS port, you will need to enable the HTTP redirection service:
```ini
[server]
REDIRECT_OTHER_PORT = true
; Port the redirection service should listen on
PORT_TO_REDIRECT = 3080
```
If you are using Docker, make sure that this port is configured in your `docker-compose.yml` file.
## Using Let's Encrypt
[Let's Encrypt](https://letsencrypt.org/) is a Certificate Authority that allows you to automatically request and renew SSL/TLS certificates. In addition to starting Gitea on your configured port, to request HTTPS certificates Gitea will also need to listed on port 80, and will set up an autoredirect to HTTPS for you. Let's Encrypt will need to be able to access Gitea via the Internet to verify your ownership of the domain.

View File

@@ -8,6 +8,7 @@ package main // import "code.gitea.io/gitea"
import (
"os"
"runtime"
"strings"
"code.gitea.io/gitea/cmd"
@@ -61,8 +62,8 @@ arguments - which can alternatively be run by running the subcommand web.`
func formatBuiltWith(Tags string) string {
if len(Tags) == 0 {
return ""
return " built with " + runtime.Version()
}
return " built with: " + strings.Replace(Tags, " ", ", ", -1)
return " built with " + runtime.Version() + " : " + strings.Replace(Tags, " ", ", ", -1)
}

View File

@@ -476,8 +476,34 @@ func getIssueFromRef(repo *Repository, ref string) (*Issue, error) {
return issue, nil return issue, nil
} }
func changeIssueStatus(repo *Repository, doer *User, ref string, refMarked map[int64]bool, status bool) error {
issue, err := getIssueFromRef(repo, ref)
if err != nil {
return err
}
if issue == nil || refMarked[issue.ID] {
return nil
}
refMarked[issue.ID] = true
if issue.RepoID != repo.ID || issue.IsClosed == status {
return nil
}
issue.Repo = repo
if err = issue.ChangeStatus(doer, status); err != nil {
// Don't return an error when dependencies are open as this would let the push fail
if IsErrDependenciesLeft(err) {
return nil
}
return err
}
return nil
}
// UpdateIssuesCommit checks if issues are manipulated by commit message. // UpdateIssuesCommit checks if issues are manipulated by commit message.
func UpdateIssuesCommit(doer *User, repo *Repository, commits []*PushCommit) error { func UpdateIssuesCommit(doer *User, repo *Repository, commits []*PushCommit, branchName string) error {
// Commits are appended in the reverse order. // Commits are appended in the reverse order.
for i := len(commits) - 1; i >= 0; i-- { for i := len(commits) - 1; i >= 0; i-- {
c := commits[i] c := commits[i]
@@ -500,51 +526,21 @@ func UpdateIssuesCommit(doer *User, repo *Repository, commits []*PushCommit) err
} }
} }
// Change issue status only if the commit has been pushed to the default branch.
if repo.DefaultBranch != branchName {
continue
}
refMarked = make(map[int64]bool) refMarked = make(map[int64]bool)
// FIXME: can merge this one and next one to a common function.
for _, ref := range issueCloseKeywordsPat.FindAllString(c.Message, -1) { for _, ref := range issueCloseKeywordsPat.FindAllString(c.Message, -1) {
issue, err := getIssueFromRef(repo, ref) if err := changeIssueStatus(repo, doer, ref, refMarked, true); err != nil {
if err != nil {
return err
}
if issue == nil || refMarked[issue.ID] {
continue
}
refMarked[issue.ID] = true
if issue.RepoID != repo.ID || issue.IsClosed {
continue
}
issue.Repo = repo
if err = issue.ChangeStatus(doer, true); err != nil {
// Don't return an error when dependencies are open as this would let the push fail
if IsErrDependenciesLeft(err) {
return nil
}
return err return err
} }
} }
// It is conflict to have close and reopen at same time, so refsMarked doesn't need to reinit here. // It is conflict to have close and reopen at same time, so refsMarked doesn't need to reinit here.
for _, ref := range issueReopenKeywordsPat.FindAllString(c.Message, -1) { for _, ref := range issueReopenKeywordsPat.FindAllString(c.Message, -1) {
issue, err := getIssueFromRef(repo, ref) if err := changeIssueStatus(repo, doer, ref, refMarked, false); err != nil {
if err != nil {
return err
}
if issue == nil || refMarked[issue.ID] {
continue
}
refMarked[issue.ID] = true
if issue.RepoID != repo.ID || !issue.IsClosed {
continue
}
issue.Repo = repo
if err = issue.ChangeStatus(doer, false); err != nil {
return err return err
} }
} }
@@ -609,7 +605,7 @@ func CommitRepoAction(opts CommitRepoActionOptions) error {
opts.Commits.CompareURL = repo.ComposeCompareURL(opts.OldCommitID, opts.NewCommitID) opts.Commits.CompareURL = repo.ComposeCompareURL(opts.OldCommitID, opts.NewCommitID)
} }
if err = UpdateIssuesCommit(pusher, repo, opts.Commits.Commits); err != nil { if err = UpdateIssuesCommit(pusher, repo, opts.Commits.Commits, refName); err != nil {
log.Error(4, "updateIssuesCommit: %v", err) log.Error(4, "updateIssuesCommit: %v", err)
} }
} }

View File

@@ -227,10 +227,37 @@ func TestUpdateIssuesCommit(t *testing.T) {
AssertNotExistsBean(t, commentBean) AssertNotExistsBean(t, commentBean)
AssertNotExistsBean(t, &Issue{RepoID: repo.ID, Index: 2}, "is_closed=1") AssertNotExistsBean(t, &Issue{RepoID: repo.ID, Index: 2}, "is_closed=1")
assert.NoError(t, UpdateIssuesCommit(user, repo, pushCommits)) assert.NoError(t, UpdateIssuesCommit(user, repo, pushCommits, repo.DefaultBranch))
AssertExistsAndLoadBean(t, commentBean) AssertExistsAndLoadBean(t, commentBean)
AssertExistsAndLoadBean(t, issueBean, "is_closed=1") AssertExistsAndLoadBean(t, issueBean, "is_closed=1")
CheckConsistencyFor(t, &Action{}) CheckConsistencyFor(t, &Action{})
// Test that push to a non-default branch closes no issue.
pushCommits = []*PushCommit{
{
Sha1: "abcdef1",
CommitterEmail: "user2@example.com",
CommitterName: "User Two",
AuthorEmail: "user4@example.com",
AuthorName: "User Four",
Message: "close #1",
},
}
repo = AssertExistsAndLoadBean(t, &Repository{ID: 3}).(*Repository)
commentBean = &Comment{
Type: CommentTypeCommitRef,
CommitSHA: "abcdef1",
PosterID: user.ID,
IssueID: 6,
}
issueBean = &Issue{RepoID: repo.ID, Index: 1}
AssertNotExistsBean(t, commentBean)
AssertNotExistsBean(t, &Issue{RepoID: repo.ID, Index: 1}, "is_closed=1")
assert.NoError(t, UpdateIssuesCommit(user, repo, pushCommits, "non-existing-branch"))
AssertExistsAndLoadBean(t, commentBean)
AssertNotExistsBean(t, issueBean, "is_closed=1")
CheckConsistencyFor(t, &Action{})
} }
func testCorrectRepoAction(t *testing.T, opts CommitRepoActionOptions, actionBean *Action) { func testCorrectRepoAction(t *testing.T, opts CommitRepoActionOptions, actionBean *Action) {

View File

@@ -1402,7 +1402,7 @@ func UpdateIssueMentions(e Engine, issueID int64, mentions []string) error {
} }
memberIDs := make([]int64, 0, user.NumMembers) memberIDs := make([]int64, 0, user.NumMembers)
orgUsers, err := GetOrgUsersByOrgID(user.ID) orgUsers, err := getOrgUsersByOrgID(e, user.ID)
if err != nil { if err != nil {
return fmt.Errorf("GetOrgUsersByOrgID [%d]: %v", user.ID, err) return fmt.Errorf("GetOrgUsersByOrgID [%d]: %v", user.ID, err)
} }

View File

@@ -44,7 +44,11 @@ func (issue *Issue) loadAssignees(e Engine) (err error) {
// GetAssigneesByIssue returns everyone assigned to that issue // GetAssigneesByIssue returns everyone assigned to that issue
func GetAssigneesByIssue(issue *Issue) (assignees []*User, err error) { func GetAssigneesByIssue(issue *Issue) (assignees []*User, err error) {
err = issue.loadAssignees(x) return getAssigneesByIssue(x, issue)
}
func getAssigneesByIssue(e Engine, issue *Issue) (assignees []*User, err error) {
err = issue.loadAssignees(e)
if err != nil { if err != nil {
return assignees, err return assignees, err
} }
@@ -173,7 +177,7 @@ func (issue *Issue) changeAssignee(sess *xorm.Session, doer *User, assigneeID in
issue.PullRequest.Issue = issue issue.PullRequest.Issue = issue
apiPullRequest := &api.PullRequestPayload{ apiPullRequest := &api.PullRequestPayload{
Index: issue.Index, Index: issue.Index,
PullRequest: issue.PullRequest.APIFormat(), PullRequest: issue.PullRequest.apiFormat(sess),
Repository: issue.Repo.innerAPIFormat(sess, mode, false), Repository: issue.Repo.innerAPIFormat(sess, mode, false),
Sender: doer.APIFormat(), Sender: doer.APIFormat(),
} }

View File

@@ -748,6 +748,9 @@ func createIssueDependencyComment(e *xorm.Session, doer *User, issue *Issue, dep
if !add { if !add {
cType = CommentTypeRemoveDependency cType = CommentTypeRemoveDependency
} }
if err = issue.loadRepo(e); err != nil {
return
}
// Make two comments, one in each issue // Make two comments, one in each issue
_, err = createComment(e, &CreateCommentOptions{ _, err = createComment(e, &CreateCommentOptions{

View File

@@ -19,11 +19,9 @@ func TestCreateIssueDependency(t *testing.T) {
issue1, err := GetIssueByID(1) issue1, err := GetIssueByID(1)
assert.NoError(t, err) assert.NoError(t, err)
issue1.LoadAttributes()
issue2, err := GetIssueByID(2) issue2, err := GetIssueByID(2)
assert.NoError(t, err) assert.NoError(t, err)
issue2.LoadAttributes()
// Create a dependency and check if it was successful // Create a dependency and check if it was successful
err = CreateIssueDependency(user1, issue1, issue2) err = CreateIssueDependency(user1, issue1, issue2)

View File

@@ -39,16 +39,16 @@ func mailIssueCommentToParticipants(e Engine, issue *Issue, doer *User, content
// In case the issue poster is not watching the repository and is active,
// even if we have duplicated in watchers, can be safely filtered out.
poster, err := getUserByID(e, issue.PosterID)
err = issue.loadPoster(e)
if err != nil {
return fmt.Errorf("GetUserByID [%d]: %v", issue.PosterID, err)
}
if issue.PosterID != doer.ID && poster.IsActive && !poster.ProhibitLogin {
if issue.PosterID != doer.ID && issue.Poster.IsActive && !issue.Poster.ProhibitLogin {
participants = append(participants, issue.Poster)
}
// Assignees must receive any communications
assignees, err := GetAssigneesByIssue(issue)
assignees, err := getAssigneesByIssue(e, issue)
if err != nil {
return err
}
@@ -88,6 +88,10 @@ func mailIssueCommentToParticipants(e Engine, issue *Issue, doer *User, content
names = append(names, participants[i].Name)
}
if err := issue.loadRepo(e); err != nil {
return err
}
for _, to := range tos {
SendIssueCommentMail(issue, doer, content, comment, []string{to})
}

View File

@@ -54,7 +54,7 @@ func newIssueUsers(e Engine, repo *Repository, issue *Issue) error {
func updateIssueAssignee(e *xorm.Session, issue *Issue, assigneeID int64) (removed bool, err error) { func updateIssueAssignee(e *xorm.Session, issue *Issue, assigneeID int64) (removed bool, err error) {
// Check if the user exists // Check if the user exists
assignee, err := GetUserByID(assigneeID) assignee, err := getUserByID(e, assigneeID)
if err != nil { if err != nil {
return false, err return false, err
} }

View File

@@ -644,7 +644,7 @@ func UserSignIn(username, password string) (*User, error) {
if hasUser {
switch user.LoginType {
case LoginNoType, LoginPlain, LoginOAuth2:
if user.ValidatePassword(password) {
if user.IsPasswordSet() && user.ValidatePassword(password) {
return user, nil
}

View File

@@ -393,8 +393,12 @@ func GetOrgUsersByUserID(uid int64, all bool) ([]*OrgUser, error) {
// GetOrgUsersByOrgID returns all organization-user relations by organization ID. // GetOrgUsersByOrgID returns all organization-user relations by organization ID.
func GetOrgUsersByOrgID(orgID int64) ([]*OrgUser, error) { func GetOrgUsersByOrgID(orgID int64) ([]*OrgUser, error) {
return getOrgUsersByOrgID(x, orgID)
}
func getOrgUsersByOrgID(e Engine, orgID int64) ([]*OrgUser, error) {
ous := make([]*OrgUser, 0, 10) ous := make([]*OrgUser, 0, 10)
err := x. err := e.
Where("org_id=?", orgID). Where("org_id=?", orgID).
Find(&ous) Find(&ous)
return ous, err return ous, err

View File

@@ -366,7 +366,7 @@ func (pr *PullRequest) Merge(doer *User, baseGitRepo *git.Repository, mergeStyle
return fmt.Errorf("Failed to create dir %s: %v", tmpBasePath, err) return fmt.Errorf("Failed to create dir %s: %v", tmpBasePath, err)
} }
defer os.RemoveAll(path.Dir(tmpBasePath)) defer os.RemoveAll(tmpBasePath)
var stderr string var stderr string
if _, stderr, err = process.GetManager().ExecTimeout(5*time.Minute, if _, stderr, err = process.GetManager().ExecTimeout(5*time.Minute,

View File

@@ -11,6 +11,7 @@ import (
"fmt" "fmt"
"html/template" "html/template"
"io/ioutil" "io/ioutil"
"net/url"
"os" "os"
"os/exec" "os/exec"
"path" "path"
@@ -824,7 +825,7 @@ type CloneLink struct {
// ComposeHTTPSCloneURL returns HTTPS clone URL based on given owner and repository name. // ComposeHTTPSCloneURL returns HTTPS clone URL based on given owner and repository name.
func ComposeHTTPSCloneURL(owner, repo string) string { func ComposeHTTPSCloneURL(owner, repo string) string {
return fmt.Sprintf("%s%s/%s.git", setting.AppURL, owner, repo) return fmt.Sprintf("%s%s/%s.git", setting.AppURL, url.QueryEscape(owner), url.QueryEscape(repo))
} }
func (repo *Repository) cloneLink(e Engine, isWiki bool) *CloneLink { func (repo *Repository) cloneLink(e Engine, isWiki bool) *CloneLink {
@@ -1359,12 +1360,13 @@ func CreateRepository(doer, u *User, opts CreateRepoOptions) (_ *Repository, err
} }
repo := &Repository{ repo := &Repository{
OwnerID: u.ID, OwnerID: u.ID,
Owner: u, Owner: u,
Name: opts.Name, Name: opts.Name,
LowerName: strings.ToLower(opts.Name), LowerName: strings.ToLower(opts.Name),
Description: opts.Description, Description: opts.Description,
IsPrivate: opts.IsPrivate, IsPrivate: opts.IsPrivate,
IsFsckEnabled: true,
} }
sess := x.NewSession() sess := x.NewSession()

View File

@@ -113,15 +113,15 @@ func notifyWatchers(e Engine, act *Action) error {
switch act.OpType { switch act.OpType {
case ActionCommitRepo, ActionPushTag, ActionDeleteTag, ActionDeleteBranch: case ActionCommitRepo, ActionPushTag, ActionDeleteTag, ActionDeleteBranch:
if !act.Repo.CheckUnitUser(act.UserID, false, UnitTypeCode) { if !act.Repo.checkUnitUser(e, act.UserID, false, UnitTypeCode) {
continue continue
} }
case ActionCreateIssue, ActionCommentIssue, ActionCloseIssue, ActionReopenIssue: case ActionCreateIssue, ActionCommentIssue, ActionCloseIssue, ActionReopenIssue:
if !act.Repo.CheckUnitUser(act.UserID, false, UnitTypeIssues) { if !act.Repo.checkUnitUser(e, act.UserID, false, UnitTypeIssues) {
continue continue
} }
case ActionCreatePullRequest, ActionMergePullRequest, ActionClosePullRequest, ActionReopenPullRequest: case ActionCreatePullRequest, ActionMergePullRequest, ActionClosePullRequest, ActionReopenPullRequest:
if !act.Repo.CheckUnitUser(act.UserID, false, UnitTypePullRequests) { if !act.Repo.checkUnitUser(e, act.UserID, false, UnitTypePullRequests) {
continue continue
} }
} }

View File

@@ -844,6 +844,11 @@ func DeleteDeployKey(doer *User, id int64) error {
if err = deletePublicKeys(sess, key.KeyID); err != nil { if err = deletePublicKeys(sess, key.KeyID); err != nil {
return err return err
} }
// after deleted the public keys, should rewrite the public keys file
if err = rewriteAllPublicKeys(sess); err != nil {
return err
}
} }
return sess.Commit() return sess.Commit()

View File

@@ -32,12 +32,22 @@ func GetUserHeatmapDataByUser(user *User) ([]*UserHeatmapData, error) {
groupByName = groupBy groupByName = groupBy
} }
err := x.Select(groupBy+" AS timestamp, count(user_id) as contributions"). sess := x.Select(groupBy+" AS timestamp, count(user_id) as contributions").
Table("action"). Table("action").
Where("user_id = ?", user.ID). Where("user_id = ?", user.ID).
And("created_unix > ?", (util.TimeStampNow() - 31536000)). And("created_unix > ?", (util.TimeStampNow() - 31536000))
GroupBy(groupByName).
// * Heatmaps for individual users only include actions that the user themself
// did.
// * For organizations actions by all users that were made in owned
// repositories are counted.
if user.Type == UserTypeIndividual {
sess = sess.And("act_user_id = ?", user.ID)
}
err := sess.GroupBy(groupByName).
OrderBy("timestamp"). OrderBy("timestamp").
Find(&hdata) Find(&hdata)
return hdata, err return hdata, err
} }

View File

@@ -247,11 +247,17 @@ func (ls *Source) SearchEntry(name, passwd string, directBind bool) *SearchResul
return nil return nil
} }
var isAttributeSSHPublicKeySet = len(strings.TrimSpace(ls.AttributeSSHPublicKey)) > 0
attribs := []string{ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail}
if isAttributeSSHPublicKeySet {
attribs = append(attribs, ls.AttributeSSHPublicKey)
}
log.Trace("Fetching attributes '%v', '%v', '%v', '%v', '%v' with filter %s and base %s", ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail, ls.AttributeSSHPublicKey, userFilter, userDN) log.Trace("Fetching attributes '%v', '%v', '%v', '%v', '%v' with filter %s and base %s", ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail, ls.AttributeSSHPublicKey, userFilter, userDN)
search := ldap.NewSearchRequest( search := ldap.NewSearchRequest(
userDN, ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false, userFilter, userDN, ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false, userFilter,
[]string{ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail, ls.AttributeSSHPublicKey}, attribs, nil)
nil)
sr, err := l.Search(search) sr, err := l.Search(search)
if err != nil { if err != nil {
@@ -267,11 +273,15 @@ func (ls *Source) SearchEntry(name, passwd string, directBind bool) *SearchResul
return nil return nil
} }
var sshPublicKey []string
username := sr.Entries[0].GetAttributeValue(ls.AttributeUsername) username := sr.Entries[0].GetAttributeValue(ls.AttributeUsername)
firstname := sr.Entries[0].GetAttributeValue(ls.AttributeName) firstname := sr.Entries[0].GetAttributeValue(ls.AttributeName)
surname := sr.Entries[0].GetAttributeValue(ls.AttributeSurname) surname := sr.Entries[0].GetAttributeValue(ls.AttributeSurname)
mail := sr.Entries[0].GetAttributeValue(ls.AttributeMail) mail := sr.Entries[0].GetAttributeValue(ls.AttributeMail)
sshPublicKey := sr.Entries[0].GetAttributeValues(ls.AttributeSSHPublicKey) if isAttributeSSHPublicKeySet {
sshPublicKey = sr.Entries[0].GetAttributeValues(ls.AttributeSSHPublicKey)
}
isAdmin := checkAdmin(l, ls, userDN) isAdmin := checkAdmin(l, ls, userDN)
if !directBind && ls.AttributesInBind { if !directBind && ls.AttributesInBind {
@@ -320,11 +330,17 @@ func (ls *Source) SearchEntries() []*SearchResult {
userFilter := fmt.Sprintf(ls.Filter, "*") userFilter := fmt.Sprintf(ls.Filter, "*")
var isAttributeSSHPublicKeySet = len(strings.TrimSpace(ls.AttributeSSHPublicKey)) > 0
attribs := []string{ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail}
if isAttributeSSHPublicKeySet {
attribs = append(attribs, ls.AttributeSSHPublicKey)
}
log.Trace("Fetching attributes '%v', '%v', '%v', '%v', '%v' with filter %s and base %s", ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail, ls.AttributeSSHPublicKey, userFilter, ls.UserBase) log.Trace("Fetching attributes '%v', '%v', '%v', '%v', '%v' with filter %s and base %s", ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail, ls.AttributeSSHPublicKey, userFilter, ls.UserBase)
search := ldap.NewSearchRequest( search := ldap.NewSearchRequest(
ls.UserBase, ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false, userFilter, ls.UserBase, ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false, userFilter,
[]string{ls.AttributeUsername, ls.AttributeName, ls.AttributeSurname, ls.AttributeMail, ls.AttributeSSHPublicKey}, attribs, nil)
nil)
var sr *ldap.SearchResult var sr *ldap.SearchResult
if ls.UsePagedSearch() { if ls.UsePagedSearch() {
@@ -341,12 +357,14 @@ func (ls *Source) SearchEntries() []*SearchResult {
for i, v := range sr.Entries { for i, v := range sr.Entries {
result[i] = &SearchResult{ result[i] = &SearchResult{
Username: v.GetAttributeValue(ls.AttributeUsername), Username: v.GetAttributeValue(ls.AttributeUsername),
Name: v.GetAttributeValue(ls.AttributeName), Name: v.GetAttributeValue(ls.AttributeName),
Surname: v.GetAttributeValue(ls.AttributeSurname), Surname: v.GetAttributeValue(ls.AttributeSurname),
Mail: v.GetAttributeValue(ls.AttributeMail), Mail: v.GetAttributeValue(ls.AttributeMail),
SSHPublicKey: v.GetAttributeValues(ls.AttributeSSHPublicKey), IsAdmin: checkAdmin(l, ls, v.DN),
IsAdmin: checkAdmin(l, ls, v.DN), }
if isAttributeSSHPublicKeySet {
result[i].SSHPublicKey = v.GetAttributeValues(ls.AttributeSSHPublicKey)
} }
} }

View File

@@ -209,7 +209,7 @@ func Contexter() macaron.Handler {
if err == nil && len(repo.DefaultBranch) > 0 { if err == nil && len(repo.DefaultBranch) > 0 {
branchName = repo.DefaultBranch branchName = repo.DefaultBranch
} }
prefix := setting.AppURL + path.Join(ownerName, repoName, "src", "branch", branchName) prefix := setting.AppURL + path.Join(url.QueryEscape(ownerName), url.QueryEscape(repoName), "src", "branch", branchName)
c.Header().Set("Content-Type", "text/html") c.Header().Set("Content-Type", "text/html")
c.WriteHeader(http.StatusOK) c.WriteHeader(http.StatusOK)
c.Write([]byte(com.Expand(`<!doctype html> c.Write([]byte(com.Expand(`<!doctype html>

View File

@@ -8,6 +8,7 @@ package context
import ( import (
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"net/url"
"path" "path"
"strings" "strings"
@@ -162,7 +163,7 @@ func RetrieveBaseRepo(ctx *Context, repo *models.Repository) {
// ComposeGoGetImport returns go-get-import meta content. // ComposeGoGetImport returns go-get-import meta content.
func ComposeGoGetImport(owner, repo string) string { func ComposeGoGetImport(owner, repo string) string {
return path.Join(setting.Domain, setting.AppSubURL, owner, repo) return path.Join(setting.Domain, setting.AppSubURL, url.QueryEscape(owner), url.QueryEscape(repo))
} }
// EarlyResponseForGoGetMeta responses appropriate go-get meta with status 200 // EarlyResponseForGoGetMeta responses appropriate go-get meta with status 200

View File

@@ -497,12 +497,15 @@ func authenticate(ctx *context.Context, repository *models.Repository, authoriza
accessMode = models.AccessModeWrite accessMode = models.AccessModeWrite
} }
// ctx.IsSigned is unnecessary here, this will be checked in perm.CanAccess
perm, err := models.GetUserRepoPermission(repository, ctx.User) perm, err := models.GetUserRepoPermission(repository, ctx.User)
if err != nil { if err != nil {
return false return false
} }
if ctx.IsSigned {
return perm.CanAccess(accessMode, models.UnitTypeCode) canRead := perm.CanAccess(accessMode, models.UnitTypeCode)
if canRead {
return true
} }
user, repo, opStr, err := parseToken(authorization) user, repo, opStr, err := parseToken(authorization)
@@ -582,7 +585,7 @@ func parseToken(authorization string) (*models.User, *models.Repository, string,
if err != nil { if err != nil {
return nil, nil, "basic", err return nil, nil, "basic", err
} }
if !u.ValidatePassword(password) { if !u.IsPasswordSet() || !u.ValidatePassword(password) {
return nil, nil, "basic", fmt.Errorf("Basic auth failed") return nil, nil, "basic", fmt.Errorf("Basic auth failed")
} }
return u, nil, "basic", nil return u, nil, "basic", nil

View File

@@ -39,6 +39,7 @@ func decodeJSONError(resp *http.Response) *Response {
func newInternalRequest(url, method string) *httplib.Request { func newInternalRequest(url, method string) *httplib.Request {
req := newRequest(url, method).SetTLSClientConfig(&tls.Config{ req := newRequest(url, method).SetTLSClientConfig(&tls.Config{
InsecureSkipVerify: true, InsecureSkipVerify: true,
ServerName: setting.Domain,
}) })
if setting.Protocol == setting.UnixSocket { if setting.Protocol == setting.UnixSocket {
req.SetTransport(&http.Transport{ req.SetTransport(&http.Transport{

View File

@@ -117,7 +117,7 @@ func (opts *Options) handle(ctx *macaron.Context, log *log.Logger, opt *Options)
if fi.IsDir() {
// Redirect if missing trailing slash.
if !strings.HasSuffix(ctx.Req.URL.Path, "/") {
http.Redirect(ctx.Resp, ctx.Req.Request, ctx.Req.URL.Path+"/", http.StatusFound)
http.Redirect(ctx.Resp, ctx.Req.Request, path.Clean(ctx.Req.URL.Path+"/"), http.StatusFound)
return true
}

View File

@@ -98,3 +98,8 @@ func Min(a, b int) int {
} }
return a return a
} }
// IsEmptyString checks if the provided string is empty
func IsEmptyString(s string) bool {
return len(strings.TrimSpace(s)) == 0
}

View File

@@ -77,3 +77,20 @@ func TestIsExternalURL(t *testing.T) {
assert.Equal(t, test.Expected, IsExternalURL(test.RawURL)) assert.Equal(t, test.Expected, IsExternalURL(test.RawURL))
} }
} }
func TestIsEmptyString(t *testing.T) {
cases := []struct {
s string
expected bool
}{
{"", true},
{" ", true},
{" ", true},
{" a", false},
}
for _, v := range cases {
assert.Equal(t, v.expected, IsEmptyString(v.s))
}
}

View File

@@ -655,6 +655,7 @@ ext_issues.desc = Link to an external issue tracker.
issues.desc = Organize bug reports, tasks and milestones. issues.desc = Organize bug reports, tasks and milestones.
issues.new = New Issue issues.new = New Issue
issues.new.title_empty = Title cannot be empty
issues.new.labels = Labels issues.new.labels = Labels
issues.new.no_label = No Label issues.new.no_label = No Label
issues.new.clear_labels = Clear labels issues.new.clear_labels = Clear labels

View File

@@ -859,6 +859,7 @@ pulls.title_wip_desc=`<a href="#">Sāciet virsrakstu ar <strong>%s</strong></a>,
pulls.cannot_merge_work_in_progress=Šis izmaiņu pieprasījums ir atzīmēts, ka pie tā vēl notiek izstrāde. Noņemiet <strong>%s</strong> no virsraksta sākuma, kad tas ir pabeigts. pulls.cannot_merge_work_in_progress=Šis izmaiņu pieprasījums ir atzīmēts, ka pie tā vēl notiek izstrāde. Noņemiet <strong>%s</strong> no virsraksta sākuma, kad tas ir pabeigts.
pulls.data_broken=Izmaiņu pieprasījums ir bojāts, jo dzēsta informācija no atdalītā repozitorija. pulls.data_broken=Izmaiņu pieprasījums ir bojāts, jo dzēsta informācija no atdalītā repozitorija.
pulls.is_checking=Notiek konfliktu pārbaude, mirkli uzgaidiet un atjaunojiet lapu. pulls.is_checking=Notiek konfliktu pārbaude, mirkli uzgaidiet un atjaunojiet lapu.
pulls.blocked_by_approvals=Šim izmaiņu pieprasījumam nav nepieciešamais apstiprinājumu daudzums. %d no %d apstiprinājumi piešķirti.
pulls.can_auto_merge_desc=Šo izmaiņu pieprasījumu var automātiski sapludināt. pulls.can_auto_merge_desc=Šo izmaiņu pieprasījumu var automātiski sapludināt.
pulls.cannot_auto_merge_desc=Šis izmaiņu pieprasījums nevar tikt automātiski sapludināts konfliktu dēļ. pulls.cannot_auto_merge_desc=Šis izmaiņu pieprasījums nevar tikt automātiski sapludināts konfliktu dēļ.
pulls.cannot_auto_merge_helper=Sapludiniet manuāli, lai atrisinātu konfliktus. pulls.cannot_auto_merge_helper=Sapludiniet manuāli, lai atrisinātu konfliktus.
@@ -867,6 +868,7 @@ pulls.no_merge_helper=Lai sapludinātu šo izmaiņu pieprasījumu, iespējojiet
pulls.no_merge_wip=Šo izmaiņu pieprasījumu nav iespējams sapludināt, jo tas ir atzīmēts, ka darbs pie tā vēl nav pabeigts. pulls.no_merge_wip=Šo izmaiņu pieprasījumu nav iespējams sapludināt, jo tas ir atzīmēts, ka darbs pie tā vēl nav pabeigts.
pulls.merge_pull_request=Izmaiņu pieprasījuma sapludināšana pulls.merge_pull_request=Izmaiņu pieprasījuma sapludināšana
pulls.rebase_merge_pull_request=Pārbāzēt un sapludināt pulls.rebase_merge_pull_request=Pārbāzēt un sapludināt
pulls.rebase_merge_commit_pull_request=Pārbāzēt un sapludināt (--no-ff)
pulls.squash_merge_pull_request=Saspiest un sapludināt pulls.squash_merge_pull_request=Saspiest un sapludināt
pulls.invalid_merge_option=Nav iespējams izmantot šādu sapludināšanas veidu šim izmaiņu pieprasījumam. pulls.invalid_merge_option=Nav iespējams izmantot šādu sapludināšanas veidu šim izmaiņu pieprasījumam.
pulls.open_unmerged_pull_exists=`Jūs nevarat veikt atkārtotas atvēršanas darbību, jo jau eksistē izmaiņu pieprasījums (#%d) ar šādu sapludināšanas informāciju.` pulls.open_unmerged_pull_exists=`Jūs nevarat veikt atkārtotas atvēršanas darbību, jo jau eksistē izmaiņu pieprasījums (#%d) ar šādu sapludināšanas informāciju.`
@@ -1012,6 +1014,7 @@ settings.pulls_desc=Iespējot repozitorija izmaiņu pieprasījumus
settings.pulls.ignore_whitespace=Pārbaudot konfliktus, ignorēt izmaiņas atstarpēs settings.pulls.ignore_whitespace=Pārbaudot konfliktus, ignorēt izmaiņas atstarpēs
settings.pulls.allow_merge_commits=Iespējot revīziju sapludināšanu settings.pulls.allow_merge_commits=Iespējot revīziju sapludināšanu
settings.pulls.allow_rebase_merge=Iespējot pārbāzēšanu sapludinot revīzijas settings.pulls.allow_rebase_merge=Iespējot pārbāzēšanu sapludinot revīzijas
settings.pulls.allow_rebase_merge_commit=Iespējot pārbāzēšanu sapludinot revīzijas (--no-ff)
settings.pulls.allow_squash_commits=Iespējot saspiešanu sapludinot revīzijas settings.pulls.allow_squash_commits=Iespējot saspiešanu sapludinot revīzijas
settings.admin_settings=Administratora iestatījumi settings.admin_settings=Administratora iestatījumi
settings.admin_enable_health_check=Iespējot veselības pārbaudi (git fsck) šim repozitorijam settings.admin_enable_health_check=Iespējot veselības pārbaudi (git fsck) šim repozitorijam
@@ -1098,6 +1101,7 @@ settings.event_issue_comment_desc=Problēmas komentārs pievienots, labots vai d
settings.event_release=Laidiens settings.event_release=Laidiens
settings.event_release_desc=Publicēts, atjaunots vai dzēsts laidiens repozitorijā. settings.event_release_desc=Publicēts, atjaunots vai dzēsts laidiens repozitorijā.
settings.event_pull_request=Izmaiņu pieprasījums settings.event_pull_request=Izmaiņu pieprasījums
settings.event_pull_request_desc=Izmaiņu pieprasījums izveidots, slēgts, atkārtoti atvērts, labots, apstiprināts, noraidīts, recenzēts, piešķirts, pievienots vai noņemts atbildīgais, pievienota etiķete, noņemta etiķete, pievienots vai noņemts atskaites punkts.
settings.event_push=Izmaiņu nosūtīšana settings.event_push=Izmaiņu nosūtīšana
settings.event_push_desc=Git izmaiņu nosūtīšana uz repozitoriju. settings.event_push_desc=Git izmaiņu nosūtīšana uz repozitoriju.
settings.event_repository=Repozitorijs settings.event_repository=Repozitorijs
@@ -1148,6 +1152,10 @@ settings.protect_merge_whitelist_committers=Iespējot sapludināšanas ierobežo
settings.protect_merge_whitelist_committers_desc=Atļaut tikai noteiktiem lietotājiem vai komandām sapludināt izmaiņu pieprasījumus šajā atzarā. settings.protect_merge_whitelist_committers_desc=Atļaut tikai noteiktiem lietotājiem vai komandām sapludināt izmaiņu pieprasījumus šajā atzarā.
settings.protect_merge_whitelist_users=Lietotāji, kas var veikt izmaiņu sapludināšanu: settings.protect_merge_whitelist_users=Lietotāji, kas var veikt izmaiņu sapludināšanu:
settings.protect_merge_whitelist_teams=Komandas, kas var veikt izmaiņu sapludināšanu: settings.protect_merge_whitelist_teams=Komandas, kas var veikt izmaiņu sapludināšanu:
settings.protect_required_approvals=Vajadzīgi apstiprinājumi:
settings.protect_required_approvals_desc=Atļaut tikai noteiktiem lietotājiem vai komandām sapludināt izmaiņu pieprasījumu, kam veikts noteikts daudzums pozitīvu recenziju.
settings.protect_approvals_whitelist_users=Lietotāji, kas var veikt recenzijas:
settings.protect_approvals_whitelist_teams=Komandas, kas var veikt recenzijas:
settings.add_protected_branch=Iespējot aizsargāšanu settings.add_protected_branch=Iespējot aizsargāšanu
settings.delete_protected_branch=Atspējot aizsargāšanu settings.delete_protected_branch=Atspējot aizsargāšanu
settings.update_protect_branch_success=Atzara aizsardzība atzaram '%s' tika saglabāta. settings.update_protect_branch_success=Atzara aizsardzība atzaram '%s' tika saglabāta.
@@ -1158,6 +1166,7 @@ settings.default_branch_desc=Norādiet noklusēto repozitorija atzaru izmaiņu p
settings.choose_branch=Izvēlieties atzaru… settings.choose_branch=Izvēlieties atzaru…
settings.no_protected_branch=Nav neviena aizsargātā atzara. settings.no_protected_branch=Nav neviena aizsargātā atzara.
settings.edit_protected_branch=Labot settings.edit_protected_branch=Labot
settings.protected_branch_required_approvals_min=Pieprasīto recenziju skaits nevar būt negatīvs.
diff.browse_source=Pārlūkot izejas kodu diff.browse_source=Pārlūkot izejas kodu
diff.parent=vecāks diff.parent=vecāks

View File

@@ -85,7 +85,7 @@ func sudo() macaron.Handler {
}
if len(sudo) > 0 {
if ctx.User.IsAdmin {
if ctx.IsSigned && ctx.User.IsAdmin {
user, err := models.GetUserByName(sudo)
if err != nil {
if models.IsErrUserNotExist(err) {

View File

@@ -16,6 +16,30 @@ import (
// GetTree get the tree of a repository. // GetTree get the tree of a repository.
func GetTree(ctx *context.APIContext) { func GetTree(ctx *context.APIContext) {
// swagger:operation GET /repos/{owner}/{repo}/git/trees/{sha} repository GetTree
// ---
// summary: Gets the tree of a repository.
// produces:
// - application/json
// parameters:
// - name: owner
// in: path
// description: owner of the repo
// type: string
// required: true
// - name: repo
// in: path
// description: name of the repo
// type: string
// required: true
// - name: sha
// in: path
// description: sha of the commit
// type: string
// required: true
// responses:
// "200":
// "$ref": "#/responses/GitTreeResponse"
sha := ctx.Params("sha") sha := ctx.Params("sha")
if len(sha) == 0 { if len(sha) == 0 {
ctx.Error(400, "sha not provided", nil) ctx.Error(400, "sha not provided", nil)

View File

@@ -133,3 +133,10 @@ type swaggerResponseAttachment struct {
//in: body //in: body
Body api.Attachment `json:"body"` Body api.Attachment `json:"body"`
} }
// GitTreeResponse
// swagger:response GitTreeResponse
type swaggerGitTreeResponse struct {
//in: body
Body api.GitTreeResponse `json:"body"`
}

View File

@@ -201,7 +201,7 @@ func Diff(ctx *context.Context) {
commitID = commit.ID.String()
}
statuses, err := models.GetLatestCommitStatus(ctx.Repo.Repository, ctx.Repo.Commit.ID.String(), 0)
statuses, err := models.GetLatestCommitStatus(ctx.Repo.Repository, commitID, 0)
if err != nil {
log.Error(3, "GetLatestCommitStatus: %v", err)
}

View File

@@ -163,7 +163,11 @@ func editFilePost(ctx *context.Context, form auth.EditRepoFileForm, isNewFile bo
branchName = form.NewBranchName branchName = form.NewBranchName
} }
form.TreePath = strings.Trim(path.Clean("/"+form.TreePath), " /") form.TreePath = cleanUploadFileName(form.TreePath)
if len(form.TreePath) == 0 {
ctx.Error(500, "Upload file name is invalid")
return
}
treeNames, treePaths := getParentTreeFields(form.TreePath) treeNames, treePaths := getParentTreeFields(form.TreePath)
ctx.Data["TreePath"] = form.TreePath ctx.Data["TreePath"] = form.TreePath
@@ -373,6 +377,13 @@ func DeleteFile(ctx *context.Context) {
func DeleteFilePost(ctx *context.Context, form auth.DeleteRepoFileForm) { func DeleteFilePost(ctx *context.Context, form auth.DeleteRepoFileForm) {
ctx.Data["PageIsDelete"] = true ctx.Data["PageIsDelete"] = true
ctx.Data["BranchLink"] = ctx.Repo.RepoLink + "/src/" + ctx.Repo.BranchNameSubURL() ctx.Data["BranchLink"] = ctx.Repo.RepoLink + "/src/" + ctx.Repo.BranchNameSubURL()
ctx.Repo.TreePath = cleanUploadFileName(ctx.Repo.TreePath)
if len(ctx.Repo.TreePath) == 0 {
ctx.Error(500, "Delete file name is invalid")
return
}
ctx.Data["TreePath"] = ctx.Repo.TreePath ctx.Data["TreePath"] = ctx.Repo.TreePath
canCommit := renderCommitRights(ctx) canCommit := renderCommitRights(ctx)
@@ -477,7 +488,12 @@ func UploadFilePost(ctx *context.Context, form auth.UploadRepoFileForm) {
branchName = form.NewBranchName branchName = form.NewBranchName
} }
form.TreePath = strings.Trim(path.Clean("/"+form.TreePath), " /") form.TreePath = cleanUploadFileName(form.TreePath)
if len(form.TreePath) == 0 {
ctx.Error(500, "Upload file name is invalid")
return
}
treeNames, treePaths := getParentTreeFields(form.TreePath) treeNames, treePaths := getParentTreeFields(form.TreePath)
if len(treeNames) == 0 { if len(treeNames) == 0 {
// We must at least have one element for user to input. // We must at least have one element for user to input.

View File

@@ -355,7 +355,7 @@ func setTemplateIfExists(ctx *context.Context, ctxDataKey string, possibleFiles
} }
} }
// NewIssue render createing issue page // NewIssue render creating issue page
func NewIssue(ctx *context.Context) { func NewIssue(ctx *context.Context) {
ctx.Data["Title"] = ctx.Tr("repo.issues.new") ctx.Data["Title"] = ctx.Tr("repo.issues.new")
ctx.Data["PageIsIssueList"] = true ctx.Data["PageIsIssueList"] = true
@@ -494,6 +494,11 @@ func NewIssuePost(ctx *context.Context, form auth.CreateIssueForm) {
return return
} }
if util.IsEmptyString(form.Title) {
ctx.RenderWithErr(ctx.Tr("repo.issues.new.title_empty"), tplIssueNew, form)
return
}
issue := &models.Issue{ issue := &models.Issue{
RepoID: repo.ID, RepoID: repo.ID,
Title: form.Title, Title: form.Title,

View File

@@ -22,6 +22,7 @@ import (
"code.gitea.io/gitea/modules/log" "code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/notification" "code.gitea.io/gitea/modules/notification"
"code.gitea.io/gitea/modules/setting" "code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
"github.com/Unknwon/com" "github.com/Unknwon/com"
) )
@@ -860,6 +861,16 @@ func CompareAndPullRequestPost(ctx *context.Context, form auth.CreateIssueForm)
return return
} }
if util.IsEmptyString(form.Title) {
PrepareCompareDiff(ctx, headUser, headRepo, headGitRepo, prInfo, baseBranch, headBranch)
if ctx.Written() {
return
}
ctx.RenderWithErr(ctx.Tr("repo.issues.new.title_empty"), tplComparePull, form)
return
}
patch, err := headGitRepo.GetPatch(prInfo.MergeBase, headBranch) patch, err := headGitRepo.GetPatch(prInfo.MergeBase, headBranch)
if err != nil { if err != nil {
ctx.ServerError("GetPatch", err) ctx.ServerError("GetPatch", err)

View File

@@ -341,6 +341,11 @@ func NewWikiPost(ctx *context.Context, form auth.NewWikiForm) {
return return
} }
if util.IsEmptyString(form.Title) {
ctx.RenderWithErr(ctx.Tr("repo.issues.new.title_empty"), tplWikiNew, form)
return
}
wikiName := models.NormalizeWikiName(form.Title) wikiName := models.NormalizeWikiName(form.Title)
if err := ctx.Repo.Repository.AddWikiPage(ctx.User, wikiName, form.Content, form.Message); err != nil { if err := ctx.Repo.Repository.AddWikiPage(ctx.User, wikiName, form.Content, form.Message); err != nil {
if models.IsErrWikiReservedName(err) { if models.IsErrWikiReservedName(err) {

View File

@@ -106,7 +106,7 @@ func NewMacaron() *macaron.Macaron {
Langs: setting.Langs, Langs: setting.Langs,
Names: setting.Names, Names: setting.Names,
DefaultLang: "en-US", DefaultLang: "en-US",
Redirect: true, Redirect: false,
})) }))
m.Use(cache.Cacher(cache.Options{ m.Use(cache.Cacher(cache.Options{
Adapter: setting.CacheService.Adapter, Adapter: setting.CacheService.Adapter,
@@ -643,7 +643,7 @@ func RegisterRoutes(m *macaron.Macaron) {
} }
ctx.Data["CommitsCount"] = ctx.Repo.CommitsCount ctx.Data["CommitsCount"] = ctx.Repo.CommitsCount
}) })
}, context.RepoAssignment(), context.UnitTypes(), reqRepoReleaseReader) }, ignSignIn, context.RepoAssignment(), context.UnitTypes(), reqRepoReleaseReader)
m.Group("/:username/:reponame", func() { m.Group("/:username/:reponame", func() {
m.Post("/topics", repo.TopicsPost) m.Post("/topics", repo.TopicsPost)

View File

@@ -115,7 +115,8 @@ func SignInOpenIDPost(ctx *context.Context, form auth.SignInOpenIDForm) {
redirectTo := setting.AppURL + "user/login/openid"
url, err := openid.RedirectURL(id, redirectTo, setting.AppURL)
if err != nil {
ctx.RenderWithErr(err.Error(), tplSignInOpenID, &form)
log.Error(1, "Error in OpenID redirect URL: %s, %v", redirectTo, err.Error())
ctx.RenderWithErr(fmt.Sprintf("Unable to find OpenID provider in %s", redirectTo), tplSignInOpenID, &form)
return
}

View File

@@ -100,7 +100,7 @@
<dt>{{.i18n.Tr "admin.dashboard.mspan_structures_usage"}}</dt> <dt>{{.i18n.Tr "admin.dashboard.mspan_structures_usage"}}</dt>
<dd>{{.SysStatus.MSpanInuse}}</dd> <dd>{{.SysStatus.MSpanInuse}}</dd>
<dt>{{.i18n.Tr "admin.dashboard.mspan_structures_obtained"}}</dt> <dt>{{.i18n.Tr "admin.dashboard.mspan_structures_obtained"}}</dt>
<dd>{{.SysStatus.HeapSys}}</dd> <dd>{{.SysStatus.MSpanSys}}</dd>
<dt>{{.i18n.Tr "admin.dashboard.mcache_structures_usage"}}</dt> <dt>{{.i18n.Tr "admin.dashboard.mcache_structures_usage"}}</dt>
<dd>{{.SysStatus.MCacheInuse}}</dd> <dd>{{.SysStatus.MCacheInuse}}</dd>
<dt>{{.i18n.Tr "admin.dashboard.mcache_structures_obtained"}}</dt> <dt>{{.i18n.Tr "admin.dashboard.mcache_structures_obtained"}}</dt>

View File

@@ -54,8 +54,8 @@
<div class="ui stackable secondary menu mobile--margin-between-items mobile--no-negative-margins"> <div class="ui stackable secondary menu mobile--margin-between-items mobile--no-negative-margins">
{{if and .PullRequestCtx.Allowed .IsViewBranch}} {{if and .PullRequestCtx.Allowed .IsViewBranch}}
<div class="fitted item"> <div class="fitted item">
<a href="{{.BaseRepo.Link}}/compare/{{.BaseRepo.DefaultBranch | EscapePound}}...{{.Repository.Owner.Name}}:{{.BranchName | EscapePound}}"> <a href="{{.BaseRepo.Link}}/compare/{{.BaseRepo.DefaultBranch | EscapePound}}...{{ if .Repository.IsFork }}{{.Repository.Owner.Name}}{{ else }}{{ .SignedUserName }}{{ end }}:{{.BranchName | EscapePound}}">
<button class="ui green tiny compact button"><i class="octicon octicon-git-compare"></i></button> <button class="ui green tiny compact button"><i class="octicon octicon-git-compare"></i></button>
</a> </a>
</div> </div>
{{end}} {{end}}

View File

@@ -1663,6 +1663,46 @@
}
}
},
"/repos/{owner}/{repo}/git/trees/{sha}": {
"get": {
"produces": [
"application/json"
],
"tags": [
"repository"
],
"summary": "Gets the tree of a repository.",
"operationId": "GetTree",
"parameters": [
{
"type": "string",
"description": "owner of the repo",
"name": "owner",
"in": "path",
"required": true
},
{
"type": "string",
"description": "name of the repo",
"name": "repo",
"in": "path",
"required": true
},
{
"type": "string",
"description": "sha of the commit",
"name": "sha",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"$ref": "#/responses/GitTreeResponse"
}
}
}
},
"/repos/{owner}/{repo}/hooks": { "/repos/{owner}/{repo}/hooks": {
"get": { "get": {
"produces": [ "produces": [
@@ -7040,6 +7080,38 @@
},
"x-go-package": "code.gitea.io/gitea/vendor/code.gitea.io/sdk/gitea"
},
"GitEntry": {
"description": "GitEntry represents a git tree",
"type": "object",
"properties": {
"mode": {
"type": "string",
"x-go-name": "Mode"
},
"path": {
"type": "string",
"x-go-name": "Path"
},
"sha": {
"type": "string",
"x-go-name": "SHA"
},
"size": {
"type": "integer",
"format": "int64",
"x-go-name": "Size"
},
"type": {
"type": "string",
"x-go-name": "Type"
},
"url": {
"type": "string",
"x-go-name": "URL"
}
},
"x-go-package": "code.gitea.io/gitea/vendor/code.gitea.io/sdk/gitea"
},
"GitObject": { "GitObject": {
"type": "object", "type": "object",
"title": "GitObject represents a Git object.", "title": "GitObject represents a Git object.",
@@ -7059,6 +7131,32 @@
},
"x-go-package": "code.gitea.io/gitea/vendor/code.gitea.io/sdk/gitea"
},
"GitTreeResponse": {
"description": "GitTreeResponse returns a git tree",
"type": "object",
"properties": {
"sha": {
"type": "string",
"x-go-name": "SHA"
},
"tree": {
"type": "array",
"items": {
"$ref": "#/definitions/GitEntry"
},
"x-go-name": "Entries"
},
"truncated": {
"type": "boolean",
"x-go-name": "Truncated"
},
"url": {
"type": "string",
"x-go-name": "URL"
}
},
"x-go-package": "code.gitea.io/gitea/vendor/code.gitea.io/sdk/gitea"
},
"Issue": { "Issue": {
"description": "Issue represents an issue in a repository", "description": "Issue represents an issue in a repository",
"type": "object", "type": "object",
@@ -8200,6 +8298,12 @@
}
}
},
"GitTreeResponse": {
"description": "GitTreeResponse",
"schema": {
"$ref": "#/definitions/GitTreeResponse"
}
},
"Hook": { "Hook": {
"description": "Hook", "description": "Hook",
"schema": { "schema": {

View File

@@ -822,7 +822,7 @@ func (db *postgres) SqlType(c *core.Column) string {
case core.NVarchar:
res = core.Varchar
case core.Uuid:
-res = core.Uuid
+return core.Uuid
case core.Blob, core.TinyBlob, core.MediumBlob, core.LongBlob:
return core.Bytea
case core.Double:
@@ -834,6 +834,10 @@ func (db *postgres) SqlType(c *core.Column) string {
res = t
}
if strings.EqualFold(res, "bool") {
// for bool, we don't need length information
return res
}
hasLen1 := (c.Length > 0)
hasLen2 := (c.Length2 > 0)
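
Both changes in this dialect make certain types bypass the length suffix that is appended further down: uuid now returns early, and bool is skipped explicitly. A small, self-contained sketch of that logic (not the actual xorm code), assuming PostgreSQL rejects length modifiers on these types:

package main

import (
	"fmt"
	"strings"
)

// sqlTypeString mimics, in spirit, how the dialect appends length information;
// types that return early never get a "(n)" suffix.
func sqlTypeString(res string, length, length2 int) string {
	if strings.EqualFold(res, "bool") || strings.EqualFold(res, "uuid") {
		// Returning the bare type name avoids invalid SQL such as
		// "BOOL(1)" or "UUID(36)", which PostgreSQL does not accept.
		return res
	}
	if length > 0 && length2 > 0 {
		return fmt.Sprintf("%s(%d,%d)", res, length, length2)
	}
	if length > 0 {
		return fmt.Sprintf("%s(%d)", res, length)
	}
	return res
}

func main() {
	fmt.Println(sqlTypeString("BOOL", 1, 0))      // BOOL
	fmt.Println(sqlTypeString("VARCHAR", 255, 0)) // VARCHAR(255)
}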

View File

@@ -481,7 +481,8 @@ func (engine *Engine) dumpTables(tables []*core.Table, w io.Writer, tp ...core.D
}
cols := table.ColumnsSeq()
-colNames := dialect.Quote(strings.Join(cols, dialect.Quote(", ")))
+colNames := engine.dialect.Quote(strings.Join(cols, engine.dialect.Quote(", ")))
+destColNames := dialect.Quote(strings.Join(cols, dialect.Quote(", ")))
rows, err := engine.DB().Query("SELECT " + colNames + " FROM " + engine.Quote(table.Name))
if err != nil {
@@ -496,7 +497,7 @@ func (engine *Engine) dumpTables(tables []*core.Table, w io.Writer, tp ...core.D
return err
}
-_, err = io.WriteString(w, "INSERT INTO "+dialect.Quote(table.Name)+" ("+colNames+") VALUES (")
+_, err = io.WriteString(w, "INSERT INTO "+dialect.Quote(table.Name)+" ("+destColNames+") VALUES (")
if err != nil {
return err
}
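
A note on the apparent intent, which the diff itself does not spell out: dumpTables reads rows through the engine's own dialect but emits INSERT statements for a possibly different target dialect, so the column list has to be quoted once per dialect. A small illustration of why the two quotings can differ, using hypothetical quote characters for a hypothetical "user" table:

package main

import (
	"fmt"
	"strings"
)

// quoteJoin wraps each column in the given quote string and joins them,
// roughly what Quote(strings.Join(cols, Quote(", "))) produces.
func quoteJoin(cols []string, quote string) string {
	quoted := make([]string, len(cols))
	for i, c := range cols {
		quoted[i] = quote + c + quote
	}
	return strings.Join(quoted, ", ")
}

func main() {
	cols := []string{"id", "name"}
	// The source engine (e.g. MySQL) quotes with backticks for the SELECT,
	// while the dump target (e.g. PostgreSQL) quotes with double quotes
	// for the generated INSERT statements.
	fmt.Println("SELECT " + quoteJoin(cols, "`") + " FROM `user`")
	fmt.Println("INSERT INTO \"user\" (" + quoteJoin(cols, "\"") + ") VALUES (1, 'a')")
}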

43
vendor/gopkg.in/ldap.v2/LICENSE generated vendored
View File

@@ -1,27 +1,22 @@
-Copyright (c) 2012 The Go Authors. All rights reserved.
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-* Redistributions of source code must retain the above copyright
-notice, this list of conditions and the following disclaimer.
-* Redistributions in binary form must reproduce the above
-copyright notice, this list of conditions and the following disclaimer
-in the documentation and/or other materials provided with the
-distribution.
-* Neither the name of Google Inc. nor the names of its
-contributors may be used to endorse or promote products derived from
-this software without specific prior written permission.
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+The MIT License (MIT)
+Copyright (c) 2011-2015 Michael Mitton (mmitton@gmail.com)
+Portions copyright (c) 2015-2016 go-ldap Authors
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

13
vendor/gopkg.in/ldap.v2/atomic_value.go generated vendored Normal file
View File

@@ -0,0 +1,13 @@
// +build go1.4
package ldap
import (
"sync/atomic"
)
// For compilers that support it, we just use the underlying sync/atomic.Value
// type.
type atomicValue struct {
atomic.Value
}

28
vendor/gopkg.in/ldap.v2/atomic_value_go13.go generated vendored Normal file
View File

@@ -0,0 +1,28 @@
// +build !go1.4
package ldap
import (
"sync"
)
// This is a helper type that emulates the use of the "sync/atomic.Value"
// struct that's available in Go 1.4 and up.
type atomicValue struct {
value interface{}
lock sync.RWMutex
}
func (av *atomicValue) Store(val interface{}) {
av.lock.Lock()
av.value = val
av.lock.Unlock()
}
func (av *atomicValue) Load() interface{} {
av.lock.RLock()
ret := av.value
av.lock.RUnlock()
return ret
}
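
These two vendored files give go-ldap a small atomicValue wrapper: on Go 1.4 and later it is just sync/atomic.Value, while the pre-1.4 fallback emulates it with an RWMutex. A minimal sketch of how such a value is used for the connection's close error, mirroring the conn.go changes below (the error text here is made up for illustration):

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

func main() {
	// On modern Go this is what the build-tagged wrapper reduces to.
	var closeErr atomic.Value

	// One goroutine records why the connection died...
	closeErr.Store(errors.New("unable to read LDAP response packet: EOF"))

	// ...and another checks for it before reporting to waiters.
	if v := closeErr.Load(); v != nil {
		fmt.Println("closing with error:", v.(error))
	}
}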

73
vendor/gopkg.in/ldap.v2/conn.go generated vendored
View File

@@ -11,6 +11,7 @@ import (
"log" "log"
"net" "net"
"sync" "sync"
"sync/atomic"
"time" "time"
"gopkg.in/asn1-ber.v1" "gopkg.in/asn1-ber.v1"
@@ -82,20 +83,18 @@ const (
type Conn struct {
conn net.Conn
isTLS bool
-isClosing bool
+closing uint32
-closeErr error
+closeErr atomicValue
isStartingTLS bool
Debug debugging
-chanConfirm chan bool
+chanConfirm chan struct{}
messageContexts map[int64]*messageContext
chanMessage chan *messagePacket
chanMessageID chan int64
-wgSender sync.WaitGroup
wgClose sync.WaitGroup
-once sync.Once
outstandingRequests uint
messageMutex sync.Mutex
-requestTimeout time.Duration
+requestTimeout int64
}
var _ Client = &Conn{}
@@ -142,7 +141,7 @@ func DialTLS(network, addr string, config *tls.Config) (*Conn, error) {
func NewConn(conn net.Conn, isTLS bool) *Conn {
return &Conn{
conn: conn,
-chanConfirm: make(chan bool),
+chanConfirm: make(chan struct{}),
chanMessageID: make(chan int64),
chanMessage: make(chan *messagePacket, 10),
messageContexts: map[int64]*messageContext{},
@@ -158,12 +157,22 @@ func (l *Conn) Start() {
l.wgClose.Add(1)
}
// isClosing returns whether or not we're currently closing.
func (l *Conn) isClosing() bool {
return atomic.LoadUint32(&l.closing) == 1
}
// setClosing sets the closing value to true
func (l *Conn) setClosing() bool {
return atomic.CompareAndSwapUint32(&l.closing, 0, 1)
}
// Close closes the connection.
func (l *Conn) Close() {
-l.once.Do(func() {
-l.isClosing = true
-l.wgSender.Wait()
+l.messageMutex.Lock()
+defer l.messageMutex.Unlock()
+if l.setClosing() {
l.Debug.Printf("Sending quit message and waiting for confirmation")
l.chanMessage <- &messagePacket{Op: MessageQuit}
<-l.chanConfirm
@@ -171,27 +180,25 @@ func (l *Conn) Close() {
l.Debug.Printf("Closing network connection") l.Debug.Printf("Closing network connection")
if err := l.conn.Close(); err != nil { if err := l.conn.Close(); err != nil {
log.Print(err) log.Println(err)
} }
l.wgClose.Done() l.wgClose.Done()
}) }
l.wgClose.Wait() l.wgClose.Wait()
} }
// SetTimeout sets the time after a request is sent that a MessageTimeout triggers
func (l *Conn) SetTimeout(timeout time.Duration) {
if timeout > 0 {
-l.requestTimeout = timeout
+atomic.StoreInt64(&l.requestTimeout, int64(timeout))
}
}
// Returns the next available messageID
func (l *Conn) nextMessageID() int64 {
-if l.chanMessageID != nil {
-if messageID, ok := <-l.chanMessageID; ok {
-return messageID
-}
+if messageID, ok := <-l.chanMessageID; ok {
+return messageID
}
return 0
}
@@ -258,7 +265,7 @@ func (l *Conn) sendMessage(packet *ber.Packet) (*messageContext, error) {
}
func (l *Conn) sendMessageWithFlags(packet *ber.Packet, flags sendMessageFlags) (*messageContext, error) {
-if l.isClosing {
+if l.isClosing() {
return nil, NewError(ErrorNetwork, errors.New("ldap: connection closed"))
}
l.messageMutex.Lock()
@@ -297,7 +304,7 @@ func (l *Conn) sendMessageWithFlags(packet *ber.Packet, flags sendMessageFlags)
func (l *Conn) finishMessage(msgCtx *messageContext) {
close(msgCtx.done)
-if l.isClosing {
+if l.isClosing() {
return
}
@@ -316,12 +323,12 @@ func (l *Conn) finishMessage(msgCtx *messageContext) {
}
func (l *Conn) sendProcessMessage(message *messagePacket) bool {
-if l.isClosing {
+l.messageMutex.Lock()
+defer l.messageMutex.Unlock()
+if l.isClosing() {
return false
}
-l.wgSender.Add(1)
l.chanMessage <- message
-l.wgSender.Done()
return true
}
@@ -333,15 +340,14 @@ func (l *Conn) processMessages() {
for messageID, msgCtx := range l.messageContexts {
// If we are closing due to an error, inform anyone who
// is waiting about the error.
-if l.isClosing && l.closeErr != nil {
+if l.isClosing() && l.closeErr.Load() != nil {
-msgCtx.sendResponse(&PacketResponse{Error: l.closeErr})
+msgCtx.sendResponse(&PacketResponse{Error: l.closeErr.Load().(error)})
}
l.Debug.Printf("Closing channel for MessageID %d", messageID)
close(msgCtx.responses)
delete(l.messageContexts, messageID)
}
close(l.chanMessageID)
-l.chanConfirm <- true
close(l.chanConfirm)
}()
@@ -350,11 +356,7 @@ func (l *Conn) processMessages() {
select {
case l.chanMessageID <- messageID:
messageID++
-case message, ok := <-l.chanMessage:
-if !ok {
-l.Debug.Printf("Shutting down - message channel is closed")
-return
-}
+case message := <-l.chanMessage:
switch message.Op {
case MessageQuit:
l.Debug.Printf("Shutting down - quit message received")
@@ -377,14 +379,15 @@ func (l *Conn) processMessages() {
l.messageContexts[message.MessageID] = message.Context
// Add timeout if defined
-if l.requestTimeout > 0 {
+requestTimeout := time.Duration(atomic.LoadInt64(&l.requestTimeout))
+if requestTimeout > 0 {
go func() {
defer func() {
if err := recover(); err != nil {
log.Printf("ldap: recovered panic in RequestTimeout: %v", err)
}
}()
-time.Sleep(l.requestTimeout)
+time.Sleep(requestTimeout)
timeoutMessage := &messagePacket{
Op: MessageTimeout,
MessageID: message.MessageID,
@@ -397,7 +400,7 @@ func (l *Conn) processMessages() {
if msgCtx, ok := l.messageContexts[message.MessageID]; ok {
msgCtx.sendResponse(&PacketResponse{message.Packet, nil})
} else {
-log.Printf("Received unexpected message %d, %v", message.MessageID, l.isClosing)
+log.Printf("Received unexpected message %d, %v", message.MessageID, l.isClosing())
ber.PrintPacket(message.Packet)
}
case MessageTimeout:
@@ -439,8 +442,8 @@ func (l *Conn) reader() {
packet, err := ber.ReadPacket(l.conn)
if err != nil {
// A read error is expected here if we are closing the connection...
-if !l.isClosing {
+if !l.isClosing() {
-l.closeErr = fmt.Errorf("unable to read LDAP response packet: %s", err)
+l.closeErr.Store(fmt.Errorf("unable to read LDAP response packet: %s", err))
l.Debug.Printf("reader error: %s", err.Error())
}
return
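
The recurring pattern in these conn.go hunks replaces a plain bool plus sync.Once/WaitGroup with an atomic flag, so only the first caller of Close performs the shutdown and every goroutine can check the state without coordination. A minimal sketch of that pattern in isolation, with names chosen to mirror the diff:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type conn struct {
	closing uint32
}

// isClosing reports whether Close has already begun.
func (c *conn) isClosing() bool { return atomic.LoadUint32(&c.closing) == 1 }

// setClosing flips the flag and reports whether this caller won the race,
// i.e. whether it is the one that should perform the actual shutdown.
func (c *conn) setClosing() bool { return atomic.CompareAndSwapUint32(&c.closing, 0, 1) }

func main() {
	c := &conn{}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if c.setClosing() {
				fmt.Printf("goroutine %d performs the shutdown\n", i)
			}
		}(i)
	}
	wg.Wait()
	fmt.Println("closing:", c.isClosing())
}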

12
vendor/gopkg.in/ldap.v2/control.go generated vendored
View File

@@ -334,18 +334,18 @@ func DecodeControl(packet *ber.Packet) Control {
for _, child := range sequence.Children {
if child.Tag == 0 {
//Warning
-child := child.Children[0]
+warningPacket := child.Children[0]
-packet := ber.DecodePacket(child.Data.Bytes())
+packet := ber.DecodePacket(warningPacket.Data.Bytes())
val, ok := packet.Value.(int64)
if ok {
-if child.Tag == 0 {
+if warningPacket.Tag == 0 {
//timeBeforeExpiration
c.Expire = val
-child.Value = c.Expire
+warningPacket.Value = c.Expire
-} else if child.Tag == 1 {
+} else if warningPacket.Tag == 1 {
//graceAuthNsRemaining
c.Grace = val
-child.Value = c.Grace
+warningPacket.Value = c.Grace
}
}
} else if child.Tag == 1 {

2
vendor/gopkg.in/ldap.v2/debug.go generated vendored
View File

@@ -6,7 +6,7 @@ import (
"gopkg.in/asn1-ber.v1" "gopkg.in/asn1-ber.v1"
) )
// debbuging type // debugging type
// - has a Printf method to write the debug output // - has a Printf method to write the debug output
type debugging bool type debugging bool

103
vendor/gopkg.in/ldap.v2/dn.go generated vendored
View File

@@ -2,7 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//
-// File contains DN parsing functionallity
+// File contains DN parsing functionality
//
// https://tools.ietf.org/html/rfc4514
//
@@ -52,7 +52,7 @@ import (
"fmt" "fmt"
"strings" "strings"
ber "gopkg.in/asn1-ber.v1" "gopkg.in/asn1-ber.v1"
) )
// AttributeTypeAndValue represents an attributeTypeAndValue from https://tools.ietf.org/html/rfc4514 // AttributeTypeAndValue represents an attributeTypeAndValue from https://tools.ietf.org/html/rfc4514
@@ -83,9 +83,19 @@ func ParseDN(str string) (*DN, error) {
attribute := new(AttributeTypeAndValue)
escaping := false
unescapedTrailingSpaces := 0
stringFromBuffer := func() string {
s := buffer.String()
s = s[0 : len(s)-unescapedTrailingSpaces]
buffer.Reset()
unescapedTrailingSpaces = 0
return s
}
for i := 0; i < len(str); i++ {
char := str[i]
if escaping {
unescapedTrailingSpaces = 0
escaping = false
switch char {
case ' ', '"', '#', '+', ',', ';', '<', '=', '>', '\\':
@@ -107,10 +117,10 @@ func ParseDN(str string) (*DN, error) {
buffer.WriteByte(dst[0])
i++
} else if char == '\\' {
unescapedTrailingSpaces = 0
escaping = true
} else if char == '=' {
-attribute.Type = buffer.String()
-buffer.Reset()
+attribute.Type = stringFromBuffer()
// Special case: If the first character in the value is # the
// following data is BER encoded so we can just fast forward
// and decode.
@@ -133,7 +143,10 @@ func ParseDN(str string) (*DN, error) {
}
} else if char == ',' || char == '+' {
// We're done with this RDN or value, push it
-attribute.Value = buffer.String()
+if len(attribute.Type) == 0 {
+return nil, errors.New("incomplete type, value pair")
+}
+attribute.Value = stringFromBuffer()
rdn.Attributes = append(rdn.Attributes, attribute)
attribute = new(AttributeTypeAndValue)
if char == ',' {
@@ -141,8 +154,17 @@ func ParseDN(str string) (*DN, error) {
rdn = new(RelativeDN)
rdn.Attributes = make([]*AttributeTypeAndValue, 0)
}
-buffer.Reset()
+} else if char == ' ' && buffer.Len() == 0 {
+// ignore unescaped leading spaces
+continue
} else {
if char == ' ' {
// Track unescaped spaces in case they are trailing and we need to remove them
unescapedTrailingSpaces++
} else {
// Reset if we see a non-space char
unescapedTrailingSpaces = 0
}
buffer.WriteByte(char)
}
}
@@ -150,9 +172,76 @@ func ParseDN(str string) (*DN, error) {
if len(attribute.Type) == 0 {
return nil, errors.New("DN ended with incomplete type, value pair")
}
-attribute.Value = buffer.String()
+attribute.Value = stringFromBuffer()
rdn.Attributes = append(rdn.Attributes, attribute)
dn.RDNs = append(dn.RDNs, rdn)
}
return dn, nil
}
// Equal returns true if the DNs are equal as defined by rfc4517 4.2.15 (distinguishedNameMatch).
// Returns true if they have the same number of relative distinguished names
// and corresponding relative distinguished names (by position) are the same.
func (d *DN) Equal(other *DN) bool {
if len(d.RDNs) != len(other.RDNs) {
return false
}
for i := range d.RDNs {
if !d.RDNs[i].Equal(other.RDNs[i]) {
return false
}
}
return true
}
// AncestorOf returns true if the other DN consists of at least one RDN followed by all the RDNs of the current DN.
// "ou=widgets,o=acme.com" is an ancestor of "ou=sprockets,ou=widgets,o=acme.com"
// "ou=widgets,o=acme.com" is not an ancestor of "ou=sprockets,ou=widgets,o=foo.com"
// "ou=widgets,o=acme.com" is not an ancestor of "ou=widgets,o=acme.com"
func (d *DN) AncestorOf(other *DN) bool {
if len(d.RDNs) >= len(other.RDNs) {
return false
}
// Take the last `len(d.RDNs)` RDNs from the other DN to compare against
otherRDNs := other.RDNs[len(other.RDNs)-len(d.RDNs):]
for i := range d.RDNs {
if !d.RDNs[i].Equal(otherRDNs[i]) {
return false
}
}
return true
}
// Equal returns true if the RelativeDNs are equal as defined by rfc4517 4.2.15 (distinguishedNameMatch).
// Relative distinguished names are the same if and only if they have the same number of AttributeTypeAndValues
// and each attribute of the first RDN is the same as the attribute of the second RDN with the same attribute type.
// The order of attributes is not significant.
// Case of attribute types is not significant.
func (r *RelativeDN) Equal(other *RelativeDN) bool {
if len(r.Attributes) != len(other.Attributes) {
return false
}
return r.hasAllAttributes(other.Attributes) && other.hasAllAttributes(r.Attributes)
}
func (r *RelativeDN) hasAllAttributes(attrs []*AttributeTypeAndValue) bool {
for _, attr := range attrs {
found := false
for _, myattr := range r.Attributes {
if myattr.Equal(attr) {
found = true
break
}
}
if !found {
return false
}
}
return true
}
// Equal returns true if the AttributeTypeAndValue is equivalent to the specified AttributeTypeAndValue
// Case of the attribute type is not significant
func (a *AttributeTypeAndValue) Equal(other *AttributeTypeAndValue) bool {
return strings.EqualFold(a.Type, other.Type) && a.Value == other.Value
}

7
vendor/gopkg.in/ldap.v2/error.go generated vendored
View File

@@ -97,6 +97,13 @@ var LDAPResultCodeMap = map[uint8]string{
LDAPResultObjectClassModsProhibited: "Object Class Mods Prohibited",
LDAPResultAffectsMultipleDSAs: "Affects Multiple DSAs",
LDAPResultOther: "Other",
ErrorNetwork: "Network Error",
ErrorFilterCompile: "Filter Compile Error",
ErrorFilterDecompile: "Filter Decompile Error",
ErrorDebugging: "Debugging Error",
ErrorUnexpectedMessage: "Unexpected Message",
ErrorUnexpectedResponse: "Unexpected Response",
}
func getLDAPResultCode(packet *ber.Packet) (code uint8, description string) {

5
vendor/gopkg.in/ldap.v2/filter.go generated vendored
View File

@@ -82,7 +82,10 @@ func CompileFilter(filter string) (*ber.Packet, error) {
if err != nil {
return nil, err
}
-if pos != len(filter) {
+switch {
+case pos > len(filter):
+return nil, NewError(ErrorFilterCompile, errors.New("ldap: unexpected end of filter"))
+case pos < len(filter):
return nil, NewError(ErrorFilterCompile, errors.New("ldap: finished compiling filter with extra at end: "+fmt.Sprint(filter[pos:])))
}
return packet, nil

61
vendor/gopkg.in/ldap.v2/ldap.go generated vendored
View File

@@ -9,7 +9,7 @@ import (
"io/ioutil" "io/ioutil"
"os" "os"
ber "gopkg.in/asn1-ber.v1" "gopkg.in/asn1-ber.v1"
) )
// LDAP Application Codes // LDAP Application Codes
@@ -153,16 +153,47 @@ func addLDAPDescriptions(packet *ber.Packet) (err error) {
func addControlDescriptions(packet *ber.Packet) {
packet.Description = "Controls"
for _, child := range packet.Children {
var value *ber.Packet
controlType := ""
child.Description = "Control" child.Description = "Control"
child.Children[0].Description = "Control Type (" + ControlTypeMap[child.Children[0].Value.(string)] + ")" switch len(child.Children) {
value := child.Children[1] case 0:
if len(child.Children) == 3 { // at least one child is required for control type
child.Children[1].Description = "Criticality" continue
value = child.Children[2]
}
value.Description = "Control Value"
switch child.Children[0].Value.(string) { case 1:
// just type, no criticality or value
controlType = child.Children[0].Value.(string)
child.Children[0].Description = "Control Type (" + ControlTypeMap[controlType] + ")"
case 2:
controlType = child.Children[0].Value.(string)
child.Children[0].Description = "Control Type (" + ControlTypeMap[controlType] + ")"
// Children[1] could be criticality or value (both are optional)
// duck-type on whether this is a boolean
if _, ok := child.Children[1].Value.(bool); ok {
child.Children[1].Description = "Criticality"
} else {
child.Children[1].Description = "Control Value"
value = child.Children[1]
}
case 3:
// criticality and value present
controlType = child.Children[0].Value.(string)
child.Children[0].Description = "Control Type (" + ControlTypeMap[controlType] + ")"
child.Children[1].Description = "Criticality"
child.Children[2].Description = "Control Value"
value = child.Children[2]
default:
// more than 3 children is invalid
continue
}
if value == nil {
continue
}
switch controlType {
case ControlTypePaging:
value.Description += " (Paging)"
if value.Value != nil {
@@ -188,18 +219,18 @@ func addControlDescriptions(packet *ber.Packet) {
for _, child := range sequence.Children {
if child.Tag == 0 {
//Warning
-child := child.Children[0]
+warningPacket := child.Children[0]
-packet := ber.DecodePacket(child.Data.Bytes())
+packet := ber.DecodePacket(warningPacket.Data.Bytes())
val, ok := packet.Value.(int64)
if ok {
-if child.Tag == 0 {
+if warningPacket.Tag == 0 {
//timeBeforeExpiration
value.Description += " (TimeBeforeExpiration)"
-child.Value = val
+warningPacket.Value = val
-} else if child.Tag == 1 {
+} else if warningPacket.Tag == 1 {
//graceAuthNsRemaining
value.Description += " (GraceAuthNsRemaining)"
-child.Value = val
+warningPacket.Value = val
}
}
} else if child.Tag == 1 {

View File

@@ -135,10 +135,10 @@ func (l *Conn) PasswordModify(passwordModifyRequest *PasswordModifyRequest) (*Pa
extendedResponse := packet.Children[1]
for _, child := range extendedResponse.Children {
if child.Tag == 11 {
-passwordModifyReponseValue := ber.DecodePacket(child.Data.Bytes())
-if len(passwordModifyReponseValue.Children) == 1 {
-if passwordModifyReponseValue.Children[0].Tag == 0 {
-result.GeneratedPassword = ber.DecodeString(passwordModifyReponseValue.Children[0].Data.Bytes())
+passwordModifyResponseValue := ber.DecodePacket(child.Data.Bytes())
+if len(passwordModifyResponseValue.Children) == 1 {
+if passwordModifyResponseValue.Children[0].Tag == 0 {
+result.GeneratedPassword = ber.DecodeString(passwordModifyResponseValue.Children[0].Data.Bytes())
}
}
}